gluster / gdeploy
gdeploy - an Ansible based tool to deploy GlusterFS
License: GNU General Public License v3.0
When smb_enable=yes is set, there are certain volume options which need to be set that gdeploy does not currently apply.
These volume options need to be added as per the admin guide:
gluster vol set test-vol1 server.allow-insecure on
gluster vol set test-vol1 stat-prefetch off
gluster vol set test-vol1 storage.batch-fsync-delay-usec 0
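These could presumably be set from the gdeploy conf itself rather than run by hand; a sketch, assuming the [volume] section's comma-separated key/value lists (the volname and smb flag are illustrative):

```ini
[volume]
action=create
volname=test-vol1
smb=yes
# set the samba-related options at create time, per the admin guide
key=server.allow-insecure,stat-prefetch,storage.batch-fsync-delay-usec
value=on,off,0
```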
I created a volume using this file:
[hosts]
gluster-{1..4}
[volume]
action=create
volname=sample3
replica=yes
replica_count=2
brick_dirs=/gluster/brick2/sample3,/gluster/brick3/sample3
force=yes
Without "force=yes" this fails.
With "force=yes" the volume created looks like this:
Volume Name: sample3
Type: Distributed-Replicate
Volume ID: b3e656ea-c4eb-4852-aa4f-e4b74f29ab0b
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: gluster-1:/gluster/brick2/sample3
Brick2: gluster-1:/gluster/brick3/sample3
Brick3: gluster-2:/gluster/brick2/sample3
Brick4: gluster-2:/gluster/brick3/sample3
Brick5: gluster-3:/gluster/brick2/sample3
Brick6: gluster-3:/gluster/brick3/sample3
Brick7: gluster-4:/gluster/brick2/sample3
Brick8: gluster-4:/gluster/brick3/sample3
Options Reconfigured:
performance.readdir-ahead: on
This is not what I want...
I want to create a 4x2 volume where, of course, the second brick of each replica pair is on a different server. But how do you do this with gdeploy?
The following are the new steps which need to be taken care of in the current script:
The 'Ansible' link in the index.rst file behaves as a relative path.
If the config file has a volume section along with 'volume options' mentioned, set all the volume options and start the volume.
Similar to #48.
In https://github.com/gluster/gdeploy/blob/master/modules/vg.py the documentation talks about the options vg_pattern and vg_name, which is wrong: vg_pattern is not used in the code and throws an error, and vg_name is actually vgname.
Using branch 2.0:
ls -l /etc/yum.repos.d/ |less
-rw-r--r--. 1 root root 0 Apr 3 14:34 _
-rw-r--r--. 1 root root 0 Apr 3 14:34 -
-rw-r--r--. 1 root root 0 Apr 3 14:34 :
-rw-r--r--. 1 root root 0 Apr 3 14:34 4
-rw-r--r--. 1 root root 43 Apr 3 14:34 4.repo
-rw-r--r--. 1 root root 0 Apr 3 14:34 6
-rw-r--r--. 1 root root 43 Apr 3 14:34 6.repo
-rw-r--r--. 1 root root 0 Apr 3 14:34 7
-rw-r--r--. 1 root root 43 Apr 3 14:34 7.repo
-rw-r--r--. 1 root root 0 Apr 3 14:34 8
-rw-r--r--. 1 root root 43 Apr 3 14:34 8.repo
-rw-r--r--. 1 root root 0 Apr 3 14:34 a
-rw-r--r--. 1 root root 43 Apr 3 14:34 a.repo
-rw-r--r--. 1 root root 0 Apr 3 14:34 b
-rw-r--r--. 1 root root 43 Apr 3 14:34 b.repo
-rw-r--r--. 1 root root 1664 Dec 9 04:59 CentOS-Base.repo
-rw-r--r--. 1 root root 1309 Dec 9 04:59 CentOS-CR.repo
-rw-r--r--. 1 root root 649 Dec 9 04:59 CentOS-Debuginfo.repo
-rw-r--r--. 1 root root 290 Dec 9 04:59 CentOS-fasttrack.repo
-rw-r--r--. 1 root root 630 Dec 9 04:59 CentOS-Media.repo
-rw-r--r--. 1 root root 1331 Dec 9 04:59 CentOS-Sources.repo
-rw-r--r--. 1 root root 1952 Dec 9 04:59 CentOS-Vault.repo
-rw-r--r--. 1 root root 0 Apr 3 14:34 d
-rw-r--r--. 1 root root 43 Apr 3 14:34 d.repo
-rw-r--r--. 1 root root 0 Apr 3 14:34 e
-rw-r--r--. 1 root root 43 Apr 3 14:34 e.repo
-rw-r--r--. 1 root root 0 Apr 3 14:34 f
-rw-r--r--. 1 root root 43 Apr 3 14:34 f.repo
-rw-r--r--. 1 root root 0 Apr 3 14:34 g
-rw-r--r--. 1 root root 43 Apr 3 14:34 g.repo
-rw-r--r--. 1 root root 0 Apr 3 14:34 h
-rw-r--r--. 1 root root 43 Apr 3 14:34 h.repo
-rw-r--r--. 1 root root 0 Apr 3 14:34 i
-rw-r--r--. 1 root root 43 Apr 3 14:34 i.repo
-rw-r--r--. 1 root root 0 Apr 3 14:34 l
-rw-r--r--. 1 root root 123 Apr 3 14:33 local-ovirt.repo
-rw-r--r--. 1 root root 43 Apr 3 14:34 l.repo
-rw-r--r--. 1 root root 0 Apr 3 14:34 n
-rw-r--r--. 1 root root 43 Apr 3 14:34 n.repo
-rw-r--r--. 1 root root 0 Apr 3 14:34 o
-rw-r--r--. 1 root root 43 Apr 3 14:34 o.repo
-rw-r--r--. 1 root root 0 Apr 3 14:34 p
-rw-r--r--. 1 root root 43 Apr 3 14:34 p.repo
-rw-r--r--. 1 root root 0 Apr 3 14:34 r
-rw-r--r--. 1 root root 43 Apr 3 14:34 _.repo
-rw-r--r--. 1 root root 43 Apr 3 14:34 -.repo
-rw-r--r--. 1 root root 43 Apr 3 14:34 :.repo
-rw-r--r--. 1 root root 43 Apr 3 14:34 r.repo
...
If there are multiple bricks on one server and the brick path is not provided in the conf file, then the volume gets created using all the bricks from a single server.
If the brick path is provided, then the peer probe itself fails.
On the tip of master I was able to reproduce a failure with generated playbooks on hosts where VDSM packages had been installed; I was using gdeploy to set up oVirt+Gluster. VDSM and Ansible both ship a module named mount.py, and the VDSM version shadows the Ansible version, while gdeploy expects the Ansible version (e.g., in gdeploy/playbooks/mount.yml). The error when the VDSM module is discovered first is like "module (mount) is missing interpreter line", consistent with the fact that the VDSM mount.py lacks a shebang. I worked around this by copying the Ansible mount module to gdeploy/modules.
Remove the [service3] definition from gdeploy/examples/hc.conf
[service3]
action=start
service=vdsmd
While adding more channels, gdeploy adds them one by one, which takes more time.
subscription-manager repos --enable=x
subscription-manager repos --enable=y
subscription-manager repos --enable=z
Rather, gdeploy could add the channels in one shot:
subscription-manager repos --enable=x --enable=y --enable=z
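Constructing the batched invocation is straightforward; a minimal sketch (repo ids x, y, z are the placeholders from the example above):

```python
# Build a single subscription-manager command instead of one call per repo.
repos = ["x", "y", "z"]
cmd = ["subscription-manager", "repos"] + ["--enable=%s" % r for r in repos]
print(" ".join(cmd))
# subscription-manager repos --enable=x --enable=y --enable=z
```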
The Ansible team has released v2.2.0.0.
Would you please use 2.2.0.0 in requirements.txt?
(It would also be great to test with the upcoming v2.2.1RC1, to ensure that works, since the Ansible team is probably going to issue that release today.)
Regression:
PLAY [gluster_servers] ********************************************************
TASK: [Create logical volume named metadata] **********************************
<192.168.200.2> REMOTE_MODULE lv action=create lvname=metadata compute=rhs lvtype='thick' snapshot_reserve="0" vgname=GLUSTER_vg1
failed: [192.168.200.2] => (item=GLUSTER_vg1) => {"failed": true, "item": "GLUSTER_vg1"}
msg: --size may not be zero.
Run `lvcreate --help' for more information.
...ignoring
<192.168.200.2> REMOTE_MODULE lv action=create lvname=metadata compute=rhs lvtype='thick' snapshot_reserve="0" vgname=GLUSTER_vg2
failed: [192.168.200.2] => (item=GLUSTER_vg2) => {"failed": true, "item": "GLUSTER_vg2"}
msg: --size may not be zero.
Run `lvcreate --help' for more information.
I'm running with latest code - f2f15d4
This line in the code,
https://github.com/gluster/gdeploy/blob/master/modules/backend_reset.py#L35
"self.pvs = literal_eval(self.validated_params('pvs'))"
seems to actually throw an exception, so execution enters the except block and self.pvs is set to None. Hence none of the pvs are really removed. In other words, a comma-separated string passed to literal_eval does not do the trick, and I do not know enough Python to suggest alternatives.
Additionally, if a YAML input just had pvs in it, this function does nothing, due to the above bug (possibly).
To reproduce, just have this in the YAML
[hosts]
127.0.0.5
[backend-reset:127.0.0.5]
pvs=sdb,sdc,sdd
and execute gdeploy -c
The end result is that none of the pvs,lvs,vgs are removed.
If I add a vgs entry to the above that is valid and exists, then that (and its downward dependencies, lvs etc.) is removed successfully.
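The literal_eval failure described above can be demonstrated in isolation; the split-based parsing below is just one possible fix, not gdeploy's actual patch:

```python
from ast import literal_eval

raw = "sdb,sdc,sdd"  # value as read from the [backend-reset] section

# literal_eval only accepts Python literals; bare names like sdb parse as a
# tuple of ast.Name nodes and raise ValueError, which is why self.pvs ends
# up None in the except block of backend_reset.py.
try:
    pvs = literal_eval(raw)
except ValueError:
    pvs = None
assert pvs is None

# Treating the value as a plain comma-separated list sidesteps the problem.
pvs = [p.strip() for p in raw.split(",")]
print(pvs)  # ['sdb', 'sdc', 'sdd']
```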
update_file provides options to copy, edit, and add.
Options to delete a file and to delete a line in a file would also be helpful.
gdeploy cannot be used when root login is disabled. Disabling root login is a common security measure that many sites enforce.
When, however, passwordless login for a non-root user is available and this user can sudo, then gdeploy ought to work, as Ansible can work in such an environment.
All that is probably needed is to replace:
"remote_user: root"
with
"become: true"
in all the playbooks.
I tried this myself, and like this I could use gdeploy with a non-root user.
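The change would amount to something like the following in each playbook (a sketch; the play and task content are illustrative, and become: true assumes passwordless sudo for the connecting user):

```yaml
# before:
# - hosts: gluster_servers
#   remote_user: root

# after: connect as any sudo-capable user and escalate
- hosts: gluster_servers
  become: true
  tasks:
    - name: example task running with elevated privileges
      command: /bin/true
```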
It gives errors randomly
TASK: [Mount the volumes] *****************************************************
failed: [10.70.42.11] => (item=/mnt/smb) => {"failed": true, "item": "/mnt/smb"}
msg: Error mounting /mnt/smb: Mount failed. Please check the log file for more details.
FATAL: all hosts have already failed -- aborting
gdeploy should be able to provision gluster bricks using tools like Blivet and libstoragemanager. While provisioning, the best practices from the gluster admin guide have to be considered.
LVM layer:
Physical Volume creation:
$ pvcreate --dataalignment <alignment_value> <disk>
where alignment_value:
- For JBODs: 256k
- For H/W RAID: RAID stripe unit size * number of data disks (the number of data disks depends upon the RAID type)
Volume Group creation:
For RAID disks:
$ vgcreate --physicalextentsize <extent_size> VOLGROUP <physical_volume>
where extent_size = RAID stripe unit size * number of data disks (the number of data disks depends upon the RAID type)
For JBODS:
$ vgcreate VOLGROUP <physical_volume>
Thin Pool creation:
$ lvcreate --thinpool VOLGROUP/thin_pool --size <pool_size> --chunksize <chunk_size> --poolmetadatasize <meta_size> --zero n
Where:
- meta_size: 16 GiB recommended; if that is a concern, at least 0.5% of pool_size
- chunk_size:
i. For JBOD: use a thin pool chunk size of 256 KiB.
ii. For RAID 6: stripe unit size * number of data disks must be between 1 MiB and 2 MiB (preferably close to 1 MiB).
iii. For RAID 10: use a thin pool chunk size of 256 KiB.
NOTE: if we need multiple bricks on a single H/W device, then create multiple thin pools from a single VG.
Thin LV creation:
$ lvcreate --thin --name LV_name --virtualsize LV_size VOLGROUP/thin_pool
XFS Layer:
For RAID 6: su = stripe unit size, sw = number of data disks.
Example :
$ mkfs.xfs other_options -d su=128k,sw=10 device_name
For RAID 10 and JBODs: this can be omitted; the default is fine.
For all types:
the default is 4k; for better performance, use a greater value like 8192; use "-n size=" to set this.
Example :
$ mkfs.xfs -f -i size=512 -n size=8192 -d su=128k,sw=10 <logical volume>
meta-data=/dev/mapper/gluster-brick1 isize=512 agcount=32, agsize=37748736 blks
$ mount -t xfs -o inode64,noatime <logical volume> <mount point>
It would be great if gdeploy could take a free disk name as input from the user and provision it as a brick usable by a gluster volume, automatically figuring out the details of the disk and provisioning it per the best practices, using the above-mentioned tools.
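The parameter arithmetic above lends itself to automation; a sketch (function names are mine, not gdeploy's) deriving the pvcreate alignment and the RAID 6 thin-pool chunk size from the stripe unit size and data-disk count:

```python
def data_alignment_kib(raid, stripe_unit_kib=0, data_disks=0):
    """pvcreate --dataalignment: 256k for JBOD, stripe unit * data disks for RAID."""
    if raid == "jbod":
        return 256
    return stripe_unit_kib * data_disks

def thinpool_chunk_kib(raid, stripe_unit_kib=0, data_disks=0):
    """Thin-pool chunk size: 256 KiB for JBOD/RAID 10; for RAID 6 the full
    stripe (stripe unit * data disks) must land between 1 MiB and 2 MiB."""
    if raid in ("jbod", "raid10"):
        return 256
    chunk = stripe_unit_kib * data_disks
    if not 1024 <= chunk <= 2048:
        raise ValueError("RAID 6 full stripe %dk outside the 1-2 MiB window" % chunk)
    return chunk

# RAID 6 with a 128k stripe unit and 10 data disks (as in the mkfs example):
align = data_alignment_kib("raid6", 128, 10)   # 1280k
chunk = thinpool_chunk_kib("raid6", 128, 10)   # 1280k, inside 1-2 MiB
print("pvcreate --dataalignment %dk <disk>" % align)
print("lvcreate --thinpool VOLGROUP/thin_pool --chunksize %dk --zero n ..." % chunk)
```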
It's not mentioned anywhere in the documentation, and gdeploy does not ensure that firewalld is up and running; if it isn't, gdeploy fails.
Surely there's a more efficient way to deploy Gluster:
[root@lago_basic_suite_hc_host0 ~]# grep "Stopped GlusterFS" /var/log/messages
Apr 3 14:42:40 lago_basic_suite_hc_host0 systemd: Stopped GlusterFS, a clustered file-system server.
Apr 3 14:42:58 lago_basic_suite_hc_host0 systemd: Stopped GlusterFS, a clustered file-system server.
Apr 3 14:43:20 lago_basic_suite_hc_host0 systemd: Stopped GlusterFS, a clustered file-system server.
Apr 3 14:43:42 lago_basic_suite_hc_host0 systemd: Stopped GlusterFS, a clustered file-system server.
Could you please add this option?
As of now, the configuration file assumes that SELinux is disabled by default and doesn't provide SELinux labels for the bricks by default.
It is highly recommended that SELinux always be enabled.
gdeploy should go with a default of SELinux enabled.
At least on CentOS, to complete the pip install requirements, one needs to install both gcc and python-devel
'yum install -y gcc python-devel' works.
vers: gdeploy-2.1.dev1-9.noarch.rpm
When smb_enable=yes, after the glusterd vol file is edited and glusterd is restarted, there needs to be a sleep before we mount the volume; as we mount immediately, it sometimes fails.
Even if gdeploy fails, its return code is zero.
As of now the gluster volume is created using the hostnames specified under [hosts]. But there can be a scenario (as in the case of geo-rep) where we want to access hosts using the names specified under [hosts] but create the gluster volume over a different interface.
For example, using gdeploy we created and got this:
Volume Name: mastervol
Type: Distribute
Volume ID: ce502594-9126-45e7-b899-b8f600aadb9c
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: gprfs037.sbu.lab.eng.bos.redhat.com:/gluster/brick/brick
Brick2: gprfs038.sbu.lab.eng.bos.redhat.com:/gluster/brick/brick
Brick3: gprfs039.sbu.lab.eng.bos.redhat.com:/gluster/brick/brick
Brick4: gprfs040.sbu.lab.eng.bos.redhat.com:/gluster/brick/brick
Options Reconfigured:
performance.readdir-ahead: on
cluster.enable-shared-storage: enable
In the above example the bricks come from the 1GigE interface defined under [hosts], but we want a configuration where we define the interface; in the example below we manually created the volume using the 10GigE interface:
Volume Name: mastervol
Type: Distribute
Volume ID: ce502594-9126-45e7-b899-b8f600aadb9c
Status: Started
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: gprfs037-10ge:/gluster/brick/brick
Brick2: gprfs038-10ge:/gluster/brick/brick
Brick3: gprfs039-10ge:/gluster/brick/brick
Brick4: gprfs040-10ge:/gluster/brick/brick
Options Reconfigured:
performance.readdir-ahead: on
cluster.enable-shared-storage: enable
While creating a thin volume on a 9 TB disk, we get a configuration like the following:
# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
GEOREP_lv GEOREP_vg Vwi-a-t--- 9.08t GEOREP_pool 0.00
GEOREP_pool GEOREP_vg twi-aot--- 16.00g 0.00 0.01
home rhel_gprfs029 -wi-ao---- 391.07g
root rhel_gprfs029 -wi-ao---- 50.00g
swap rhel_gprfs029 -wi-ao---- 23.62g
The above means that we have ~9TB virtual size while the actual size of the pool on disk is 16GB.
We should see something like the following:
# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
pool gluster_brick_ twi-aot--- 8.77t 0.02 0.02
thinlv gluster_brick_ Vwi-aot--- 8.77t pool 0.02
home rhel_gprfs029 -wi-ao---- 391.07g
root rhel_gprfs029 -wi-ao---- 50.00g
swap rhel_gprfs029 -wi-ao---- 23.62g
where both the virtual and the pool size match.
I ran sudo ANSIBLE_LIBRARY=:/root/gdeploy/modules/ GDEPLOY_TEMPLATES=/root/gdeploy gdeploy -vv --trace -c /root/gdeploy-ovirt-gluster.conf
on the tip of master.
$ git show
commit 1a172b2389a34757782be7ec23e75845de02a3df
Merge: c492ae0 2ef8204
Author: Sachidananda Urs <[email protected]>
Date: Thu Nov 17 20:56:22 2016 +0530
Merge pull request #219 from gluster-deploy/master
Do not disable stat-prefetch as part of samba setup
<...snip...>
TASK [Reloads the firewall] ****************************************************
task path: /tmp/tmpO2H0K5/firewalld-ports-op.yml:15
changed: [hyp006.example.com] => {"changed": true, "cmd": "firewall-cmd --reload", "delta": "0:00:01.005358", "end": "2016-11-18 13:00:56.112375", "rc": 0, "start": "2016-11-18 13:00:55.107017", "stderr": "", "stdout": "success", "stdout_lines": ["success"], "warnings": []}
changed: [hyp004.example.com] => {"changed": true, "cmd": "firewall-cmd --reload", "delta": "0:00:01.696034", "end": "2016-11-18 13:00:56.423115", "rc": 0, "start": "2016-11-18 13:00:54.727081", "stderr": "", "stdout": "success", "stdout_lines": ["success"], "warnings": []}
changed: [hyp005.example.com] => {"changed": true, "cmd": "firewall-cmd --reload", "delta": "0:00:01.080811", "end": "2016-11-18 13:00:56.196850", "rc": 0, "start": "2016-11-18 13:00:55.116039", "stderr": "", "stdout": "success", "stdout_lines": ["success"], "warnings": []}
PLAY RECAP *********************************************************************
hyp004.example.com : ok=3 changed=1 unreachable=0 failed=0
hyp005.example.com : ok=3 changed=1 unreachable=0 failed=0
hyp006.example.com : ok=3 changed=1 unreachable=0 failed=0
Traceback (most recent call last):
File "/bin/gdeploy", line 5, in <module>
pkg_resources.run_script('gdeploy==2.0.1', 'gdeploy')
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 540, in run_script
self.require(requires)[0].run_script(script_name, ns)
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 1455, in run_script
execfile(script_filename, namespace, namespace)
File "/usr/lib/python2.7/site-packages/gdeploy-2.0.1-py2.7.egg/EGG-INFO/scripts/gdeploy", line 228, in <module>
main(sys.argv[1:])
File "/usr/lib/python2.7/site-packages/gdeploy-2.0.1-py2.7.egg/EGG-INFO/scripts/gdeploy", line 207, in main
call_features()
File "/usr/lib/python2.7/site-packages/gdeploy-2.0.1-py2.7.egg/gdeploylib/call_features.py", line 36, in call_features
map(get_feature_dir, Global.sections)
File "/usr/lib/python2.7/site-packages/gdeploy-2.0.1-py2.7.egg/gdeploylib/call_features.py", line 83, in get_feature_dir
section_dict, yml = feature_call(section_dict)
File "/usr/lib/python2.7/site-packages/gdeploy-2.0.1-py2.7.egg/gdeployfeatures/script/script.py", line 36, in script_execute
if Global.trace:
NameError: global name 'Global' is not defined
The gdeploy Python code is not Python 3 ready. Please make it Python 3 ready.
This line, https://github.com/gluster/gdeploy/blob/master/gdeploy_setup.sh#L11 reads,
echo "export ANSIBLE_LIBRARY=$ANSIBLE_LIBRARY:'$DIR/modules/'" >> ~/.bashrc
There are two problems with this:
If $ANSIBLE_LIBRARY is not set, this generates:
DIR='/test'; echo "export ANSIBLE_LIBRARY=$ANSIBLE_LIBRARY:'$DIR/modules/'"
export ANSIBLE_LIBRARY=:'/test/modules/'
There is an unwanted ":" at the head of the export (this may not cause a problem).
There are unwanted ' characters in the ANSIBLE_LIBRARY value. This caused problems on my default CentOS 7.2 setup: during execution of any gdeploy conf it was unable to find the required modules. After removing the leading and trailing ' characters it worked as expected.
Request that the script be enhanced to address these issues.
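One possible fix uses shell parameter expansion, so the ":" is only emitted when $ANSIBLE_LIBRARY is already set and no literal quote characters end up in the value (a sketch, not the actual patch):

```shell
#!/bin/sh
DIR=/test          # stands in for the directory gdeploy_setup.sh computes
unset ANSIBLE_LIBRARY

# ${VAR:+text} expands to "text" only when VAR is set and non-empty,
# so an unset ANSIBLE_LIBRARY produces no stray leading ":".
line="export ANSIBLE_LIBRARY=${ANSIBLE_LIBRARY:+$ANSIBLE_LIBRARY:}$DIR/modules/"
echo "$line"   # export ANSIBLE_LIBRARY=/test/modules/
```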
Ansible v2.2.1.0 was released today. Given that this fixes CVE-2016-9587, I recommend updating gdeploy's requirements.txt to depend on this version.
There is additional setup needed to mount the volume using smb.
# /usr/bin/gdeploy -c gdeploy-ovirt-gluster.conf
Traceback (most recent call last):
File "/usr/bin/gdeploy", line 5, in <module>
pkg_resources.run_script('gdeploy==2.0.1', 'gdeploy')
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 540, in run_script
self.require(requires)[0].run_script(script_name, ns)
File "/usr/lib/python2.7/site-packages/pkg_resources.py", line 1455, in run_script
execfile(script_filename, namespace, namespace)
File "/usr/lib/python2.7/site-packages/gdeploy-2.0.1-py2.7.egg/EGG-INFO/scripts/gdeploy", line 29, in <module>
from gdeploylib import *
File "/usr/lib/python2.7/site-packages/gdeploy-2.0.1-py2.7.egg/gdeploylib/__init__.py", line 8, in <module>
from call_features import call_features
File "/usr/lib/python2.7/site-packages/gdeploy-2.0.1-py2.7.egg/gdeploylib/call_features.py", line 25, in <module>
import gdeployfeatures
File "/usr/lib/python2.7/site-packages/gdeploy-2.0.1-py2.7.egg/gdeployfeatures/__init__.py", line 11, in <module>
import nfs_ganesha
File "/usr/lib/python2.7/site-packages/gdeploy-2.0.1-py2.7.egg/gdeployfeatures/nfs_ganesha/__init__.py", line 1, in <module>
import nfs_ganesha
File "/usr/lib/python2.7/site-packages/gdeploy-2.0.1-py2.7.egg/gdeployfeatures/nfs_ganesha/nfs_ganesha.py", line 104
if Global.trace:
^
IndentationError: unindent does not match any outer indentation level
According to https://github.com/gluster/gdeploy/blob/master/modules/lv.py#L268
the chunksize calculation for thin-pool in case of RAID 6 should be 256K - this is wrong.
Please consult https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/Brick_Configuration.html
For testing it is important to set up the environment quickly. Sometimes we would like to tear down the setup and set it up again. In such cases, a feature to tear down the setup with a given config file would be handy.
locking-scheme=granular should be added to the standard volume options in the sample file gdeploy/examples/hc.conf.
Unmount of volume after doing setfacl fails with following error.
TASK: [Unmount the volumes] ***************************************************
failed: [10.70.42.11] => (item=/mnt/smb) => {"failed": true, "item": "/mnt/smb"}
msg: missing required arguments: src,fstype
In an environment where we have configuration as follows:
nic1 : 10.x.x.x
nic2 : 192.168.121.x (nodes, ctdb volume to be on these nics)
nic3 : 192.168.100.x (VIP, ctdb public ip to be on these nics)
gdeploy does ssh via the host entries we mention in the conf file and creates volumes using the same.
In this case we need an entry for a nodes file as well, where we can mention the IPs, and these IPs should be used for creating the ctdb volume too.
Example: http://jenkins.lab.eng.blr.redhat.com/rhsc/hc/ ...
Hi,
gdeploy should provide an option to abort the configuration when a failure is encountered.
I agree that some use cases may require gdeploy to continue even when there is a failure.
But in certain cases it doesn't seem valid for gdeploy to continue after a failure, leaving the system in a bad state.
For example, when the installation of vdsm fails for some reason, continuing with the setup process makes all the lvm commands hang (this is a known issue).
Take another case: when vg create fails but gdeploy continues to create thin_pool, thin_lv, etc., that doesn't seem meaningful.
So the solution is to have an option in each section, like "abort_on_failure", where the user can customize the behavior. By default this option could be enabled, so gdeploy stops execution on encountering a failure, which is what most use cases expect.
The volume requires the following options:
keys - user.cifs, nfs.disable
values - off, enable
These options are required on all the volumes in hc.conf
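In conf terms this could look like the following (a sketch assuming the [volume] section's comma-separated key/value lists; the volname is a placeholder):

```ini
[volume]
action=create
volname=<volname>
key=user.cifs,nfs.disable
value=off,enable
```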
After setting up the backend, the subscription module fails with the following error:
TASK: [Register with subscription-manager with username and password] *********
ERROR: error while evaluating conditional: yes
Ver : gdeploy-2.1.dev1-11.noarch
gdeploy fails if lvm2 is not installed on the hosts (pvcreate fails)
The option to be added to the /etc/glusterfs/glusterd.vol file has to be set on all the nodes of the cluster.
Currently it is done only on one node.
gdeploy-2.1.dev1-11.noarch
Ver:gdeploy-2.1.dev1-9.noarch.rpm
If the mount point is already present, it is not skipped; it fails with the following error:
[Create the dir to mount the volume, skips if present] ******************
failed: [10.70.47.64] => (item=/mnt/smb) => {"failed": true, "item": "/mnt/smb", "parsed": false}
Traceback (most recent call last):
OSError: [Errno 17] File exists: '/mnt/smb'
When using gdeploy in automation, it should be able to write its logs (stdout/stderr) to a log file.
Today you can enable '-vv', but there is no option for an output file for post inspection.
gdeploy has a config option to ignore errors: you specify 'ignore_xxx_errors=yes' to ignore errors and 'ignore_xxx_errors=no' if you don't want to. The default is 'ignore_xxx_errors=yes', which means gdeploy will ignore all errors unless 'ignore_xxx_errors=no' is specified in each section. I feel this is confusing. It is better not to ignore errors by default, and when the user wants to ignore errors we can expect them to specify 'ignore_xxx_errors=yes'. Currently I am adding 'ignore_xxx_errors=no' to all the sections.
Hi, I noted as I was poring through the playbooks that a generic clear text password is created in the set-pcs-auth-passwd.yml via shell command.
This might be better improved by moving this variable up into the configuration file read in by gdeploy as some target environments/users would prefer to use something a little more secure at the onset.
Side note - can we also avoid using "root" for the playbooks? Ansible now supports becoming a user with elevated privileges http://docs.ansible.com/ansible/become.html
ver: gdeploy-2.1.dev1-9.noarch.rpm
On running the conf file with a cifs mount, the following errors are seen, which relate to nfs even though nfs is not enabled or running on the setup:
TASK: [Restart rpc-statd service] *********************************************
failed: [10.70.47.64] => {"failed": true}
msg: Job for rpc-statd.service failed because the control process exited with error code. See "systemctl status rpc-statd.service" and "journalctl -xe" for details.