
pcs's Introduction

PCS - Pacemaker/Corosync Configuration System

Pcs is a Corosync and Pacemaker configuration tool. It permits users to easily view, modify and create Pacemaker-based clusters. Pcs contains pcsd, a pcs daemon, which operates as a remote server for pcs.


Pcs Branches

  • main
    • This is where pcs-0.12 lives.
    • Clusters running Pacemaker 3.x on top of Corosync 3.x are supported.
    • The main development happens here.
  • pcs-0.11
    • Clusters running Pacemaker 2.1 on top of Corosync 3.x are supported.
    • This branch is in maintenance mode - bugs are being fixed but only a subset of new features lands here.
  • pcs-0.10
    • Clusters running Pacemaker 2.0 on top of Corosync 3.x are supported.
    • Pacemaker 2.1 is supported if it is compiled with the --enable-compat-2.0 option.
    • This branch is no longer maintained.
  • pcs-0.9
    • Clusters running Pacemaker 1.x on top of Corosync 2.x or Corosync 1.x with CMAN are supported.
    • This branch is no longer maintained.

Dependencies

These are the runtime dependencies of pcs and pcsd:

  • python 3.12+
  • python3-cryptography
  • python3-dateutil 2.7.0+
  • python3-lxml
  • python3-pycurl
  • python3-setuptools
  • python3-setuptools_scm
  • python3-pyparsing
  • python3-tornado 6.1.0+
  • dacite
  • ruby 3.3.0+
  • killall (package psmisc)
  • corosync 3.x
  • pacemaker 3.x

Installation from Source

Apart from the dependencies listed above, these are also required for installation:

  • python development files (packages python3-devel, python3-setuptools, python3-setuptools_scm, python3-wheel)
  • ruby development files (package ruby-devel)
  • rubygems
  • rubygem bundler (package rubygem-bundler or ruby-bundler or bundler)
  • autoconf, automake
  • gcc
  • gcc-c++
  • FFI development files (package libffi-devel or libffi-dev)
  • printf (package coreutils)
  • redhat-rpm-config (if you are using Fedora)
  • wget (to download bundled libraries)

During the installation, all required rubygems are automatically downloaded and compiled.

To install pcs and pcsd, run the following in a terminal:

./autogen.sh
./configure
# alternatively './configure --enable-local-build' can be used to also download
# missing dependencies
make
make install

If you are using GNU/Linux with systemd, it is now time to:

systemctl daemon-reload

Start pcsd and make it start on boot:

systemctl start pcsd
systemctl enable pcsd

Packages

Currently pcs is packaged for Fedora, RHEL, CentOS, and Debian and their derivatives. It is likely that other Linux distributions also provide pcs packages.


Quick Start

  • Authenticate cluster nodes

    Set the same password for the hacluster user on all nodes.

    passwd hacluster

    To authenticate the nodes, run the following command on one of them, specifying all nodes of your future cluster (replace node1, node2, node3 with the actual node names). Make sure pcsd is running on all nodes.

    pcs host auth node1 node2 node3 -u hacluster
  • Create a cluster

    To create a cluster, run the following command on one node (replace cluster_name with the name of your cluster and node1, node2, node3 with the list of nodes in the cluster). The --start and --enable options start the cluster and configure the nodes to start it on boot, respectively.

    pcs cluster setup cluster_name node1 node2 node3 --start --enable
  • Check the cluster status

    After a few moments the cluster should start up, and you can check its status:

    pcs status
  • Add cluster resources

    After this you can add stonith agents and resources (an illustrative example follows this list):

    pcs stonith create --help

    and

    pcs resource create --help
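
    As an illustration only (not part of the upstream quick start), a simple floating IP resource could be created as follows; the IP address and netmask are placeholders for your environment, and a production cluster should also have stonith devices matching your hardware:

    pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.168.0.120 cidr_netmask=24 op monitor interval=30s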

Further Documentation

The ClusterLabs website is an excellent place to learn more about Pacemaker clusters.


Inquiries

If you have any bug reports or feature requests, please feel free to open a GitHub issue on the pcs project.

Alternatively, you can use the ClusterLabs users mailing list, which is also a great place to ask questions about Pacemaker clusters.

pcs's People

Contributors

adamwill, akanouras, alessandro-barbieri, andyprice, btravouillon, ctrlzmaster, daaitudian, davidvossel, f16shen, fabbione, feist, hideoyamauchi, idevat, jfriesse, jnpkrn, kirvedx, kmalyjur, loz-hurst, lucaskanashiro, marxsk, mbaldessari, mirecheck, mtasaka, nrwahl2, ondrejmular, pederrr, robbmanes, roidelapluie, tomjelinek, vvidic


pcs's Issues

pcs resource start <resource> and nothing happens

Most startup within the cluster is an asynchronous operation, but I have seen many cases where I attempt to start a pcs resource and get zero information in the logs. If I run pcs resource it will still say Stopped; multiple start attempts and nothing happens.

I have attempted a few things with very little change. What can I do in a situation like this? So far my only solution has been to restart the entire cluster, which is not ideal. What troubleshooting can I do here? (I am still learning the ropes with the new command-line suite, which so far is super easy compared to crm!)

pcs resource failcount reset
pcs resource cleanup
pcs resource update

Adding more rings

The default pcs cluster setup adds a single ring, and it does so on the interface where the default gateway is. What if you want a ring to use a different NIC (a private network for cluster heartbeats), or to add more rings? Will there be syntax for that?
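
For reference, in current pcs releases (0.10 and later) each node can be given one address per Corosync link (ring) at setup time; a hedged sketch with placeholder addresses:

pcs cluster setup mycluster \
  node1 addr=192.168.1.11 addr=10.0.0.11 \
  node2 addr=192.168.1.12 addr=10.0.0.12

Here the first addr= of each node goes to link 0 and the second to link 1, so the second link can sit on a dedicated heartbeat NIC.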

[Question] About a method to move pcsd on RHEL6.

We are going to run pcsd on RHEL6.

Our environment combines Corosync 2.x and Pacemaker 1.1.13.

As far as we can tell from the manual and the RH sources, pcsd does not seem to run with this combination on RHEL6.
We installed pcsd in the RHEL6 environment, but service pcsd start does not actually complete.
(We did not install cman.)

We would like to know the details of how to run pcsd in our environment.
Or can pcsd not be run with this combination at all?

If there is a link that could serve as a reference, please tell us.

Best Regards,
Hideo Yamauchi.

python scripts

Hello,

I noticed that the python scripts in pcs are always called using "python $scriptname".
This does not work on distributions and platforms where python is actually python3, not python2.
Would you mind changing the scripts to be called with python2 instead of just python?
For example, on Arch Linux this is a problem, and I can't package pcs for that distribution because of it.

Kind regards,
Thermi

[bug] pcs config shows nothing

Hello,

os: centos 6.6
pcs: pcs-0.9.123-9.0.1.el6.centos.2.x86_64
cman is not being used

pcs config

Cluster Name:
Traceback (most recent call last):
  File "/usr/sbin/pcs", line 138, in <module>
    main(sys.argv[1:])
  File "/usr/sbin/pcs", line 132, in main
    cluster.print_config()
  File "/usr/lib/python2.6/site-packages/pcs/cluster.py", line 890, in print_config
    status.nodes_status(["config"])
  File "/usr/lib/python2.6/site-packages/pcs/status.py", line 65, in nodes_status
    corosync_nodes = utils.getNodesFromCorosyncConf()
  File "/usr/lib/python2.6/site-packages/pcs/utils.py", line 310, in getNodesFromCorosyncConf
    dom = parse(settings.cluster_conf_file)
  File "/usr/lib64/python2.6/xml/dom/minidom.py", line 1918, in parse
    return expatbuilder.parse(file)
  File "/usr/lib64/python2.6/xml/dom/expatbuilder.py", line 924, in parse
    result = builder.parseFile(fp)
  File "/usr/lib64/python2.6/xml/dom/expatbuilder.py", line 211, in parseFile
    parser.Parse("", True)
xml.parsers.expat.ExpatError: no element found: line 1, column 0

Shouldn't it output the config from the existing corosync.conf?

Resources stuck in failed state

My cluster resources are stuck in a failed state. When I do a debug-start, things seem to work fine, but the status reflects otherwise. When I do a pcs resource cleanup or pcs resource failcount reset, they don't seem to do anything useful. Also, the constraints appear to have no effect on how and where resources start.

OS: CentOS7
pcs: 0.9.137
pacemaker: 1.1.12
corosync: 2.3.4

Any help would be greatly appreciated!

pcs cluster cib

<cib crm_feature_set="3.0.9" validate-with="pacemaker-2.3" epoch="202" num_updates="19" admin_epoch="0" cib-last-written="Thu Apr 23 19:32:53 2015" have-quorum="1" dc-uuid="2">
  <configuration>
    <crm_config>
      <cluster_property_set id="cib-bootstrap-options">
        <nvpair id="cib-bootstrap-options-have-watchdog" name="have-watchdog" value="false"/>
        <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.12-a14efad"/>
        <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/>
        <nvpair id="cib-bootstrap-options-cluster-name" name="cluster-name" value="serviced"/>
        <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="false"/>
        <nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="ignore"/>
        <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1429807927"/>
        <nvpair id="cib-bootstrap-options-resource-stickiness" name="resource-stickiness" value="100"/>
      </cluster_property_set>
    </crm_config>
    <nodes>
      <node id="1" uname="ip-10-111-23-197.zenoss.loc"/>
      <node id="2" uname="ip-10-111-23-78.zenoss.loc"/>
    </nodes>
    <resources>
      <group id="svcd-group">
        <primitive class="ocf" id="outbound-ip" provider="heartbeat" type="IPaddr2">
          <instance_attributes id="outbound-ip-instance_attributes">
            <nvpair id="outbound-ip-instance_attributes-ip" name="ip" value="199.166.22.4"/>
            <nvpair id="outbound-ip-instance_attributes-cidr_netmask" name="cidr_netmask" value="24"/>
            <nvpair id="outbound-ip-instance_attributes-nic" name="nic" value="eth0"/>
          </instance_attributes>
          <operations>
            <op id="outbound-ip-start-timeout-20s" interval="0s" name="start" timeout="20s"/>
            <op id="outbound-ip-stop-timeout-20s" interval="0s" name="stop" timeout="20s"/>
            <op id="outbound-ip-monitor-interval-30s" interval="30s" name="monitor"/>
          </operations>
        </primitive>
        <primitive class="ocf" id="svcd-daemon" provider="zenoss" type="serviced">
          <instance_attributes id="svcd-daemon-instance_attributes">
            <nvpair id="svcd-daemon-instance_attributes-ipaddr" name="ipaddr" value="199.166.22.4"/>
          </instance_attributes>
          <operations>
            <op id="svcd-daemon-start-timeout-60" interval="0s" name="start" timeout="60"/>
            <op id="svcd-daemon-stop-timeout-60" interval="0s" name="stop" timeout="60"/>
            <op id="svcd-daemon-monitor-interval-20" interval="20" name="monitor" timeout="30"/>
          </operations>
        </primitive>
      </group>
    </resources>
    <constraints>
      <rsc_colocation id="colocation-svcd-daemon-outbound-ip-INFINITY" rsc="svcd-daemon" score="INFINITY" with-rsc="outbound-ip"/>
    </constraints>
    <rsc_defaults>
      <meta_attributes id="rsc_defaults-options">
        <nvpair id="rsc_defaults-options-resource-stickiness" name="resource-stickiness" value="200"/>
      </meta_attributes>
    </rsc_defaults>
  </configuration>
  <status>
    <node_state id="2" uname="ip-10-111-23-78.zenoss.loc" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
      <transient_attributes id="2">
        <instance_attributes id="status-2">
          <nvpair id="status-2-shutdown" name="shutdown" value="0"/>
          <nvpair id="status-2-last-failure-svcd-daemon" name="last-failure-svcd-daemon" value="1429807653"/>
          <nvpair id="status-2-probe_complete" name="probe_complete" value="true"/>
        </instance_attributes>
      </transient_attributes>
      <lrm id="2">
        <lrm_resources>
          <lrm_resource id="svcd-daemon" type="serviced" class="ocf" provider="zenoss">
            <lrm_rsc_op id="svcd-daemon_last_0" operation_key="svcd-daemon_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.9" transition-key="8:34:7:ae3f1f54-93af-4f51-8f33-2e11ffd980c4" transition-magic="0:7;8:34:7:ae3f1f54-93af-4f51-8f33-2e11ffd980c4" call-id="40" rc-code="7" op-status="0" interval="0" last-run="1429817598" last-rc-change="1429817598" exec-time="23" queue-time="0" op-digest="fd2d25ebb470945a1fe92d4f1b39995c" on_node="ip-10-111-23-78.zenoss.loc"/>
          </lrm_resource>
          <lrm_resource id="outbound-ip" type="IPaddr2" class="ocf" provider="heartbeat">
            <lrm_rsc_op id="outbound-ip_last_failure_0" operation_key="outbound-ip_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.9" transition-key="7:34:7:ae3f1f54-93af-4f51-8f33-2e11ffd980c4" transition-magic="0:0;7:34:7:ae3f1f54-93af-4f51-8f33-2e11ffd980c4" call-id="36" rc-code="0" op-status="0" interval="0" last-run="1429817598" last-rc-change="1429817598" exec-time="35" queue-time="0" op-digest="622a004bf76f86814c0a46eda903dbbd"/>
            <lrm_rsc_op id="outbound-ip_last_0" operation_key="outbound-ip_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.9" transition-key="7:34:7:ae3f1f54-93af-4f51-8f33-2e11ffd980c4" transition-magic="0:0;7:34:7:ae3f1f54-93af-4f51-8f33-2e11ffd980c4" call-id="36" rc-code="0" op-status="0" interval="0" last-run="1429817598" last-rc-change="1429817598" exec-time="35" queue-time="0" op-digest="622a004bf76f86814c0a46eda903dbbd" on_node="ip-10-111-23-78.zenoss.loc"/>
            <lrm_rsc_op id="outbound-ip_monitor_30000" operation_key="outbound-ip_monitor_30000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.9" transition-key="8:37:0:ae3f1f54-93af-4f51-8f33-2e11ffd980c4" transition-magic="0:0;8:37:0:ae3f1f54-93af-4f51-8f33-2e11ffd980c4" call-id="41" rc-code="0" op-status="0" interval="30000" last-rc-change="1429817599" exec-time="34" queue-time="0" op-digest="81b7a08cdc81ad4a965e95c21354bfdd" on_node="ip-10-111-23-78.zenoss.loc"/>
          </lrm_resource>
        </lrm_resources>
      </lrm>
    </node_state>
    <node_state id="1" uname="ip-10-111-23-197.zenoss.loc" crmd="online" crm-debug-origin="do_update_resource" in_ccm="true" join="member" expected="member">
      <transient_attributes id="1">
        <instance_attributes id="status-1">
          <nvpair id="status-1-shutdown" name="shutdown" value="0"/>
          <nvpair id="status-1-last-failure-svcd-daemon" name="last-failure-svcd-daemon" value="1429817599"/>
          <nvpair id="status-1-probe_complete" name="probe_complete" value="true"/>
          <nvpair id="status-1-fail-count-svcd-daemon" name="fail-count-svcd-daemon" value="INFINITY"/>
        </instance_attributes>
      </transient_attributes>
      <lrm id="1">
        <lrm_resources>
          <lrm_resource id="svcd-daemon" type="serviced" class="ocf" provider="zenoss">
            <lrm_rsc_op id="svcd-daemon_last_failure_0" operation_key="svcd-daemon_stop_0" operation="stop" crm-debug-origin="do_update_resource" crm_feature_set="3.0.9" transition-key="1:37:0:ae3f1f54-93af-4f51-8f33-2e11ffd980c4" transition-magic="0:6;1:37:0:ae3f1f54-93af-4f51-8f33-2e11ffd980c4" call-id="37" rc-code="6" op-status="0" interval="0" last-run="1429817599" last-rc-change="1429817599" exec-time="21" queue-time="0" op-digest="fd2d25ebb470945a1fe92d4f1b39995c"/>
            <lrm_rsc_op id="svcd-daemon_last_0" operation_key="svcd-daemon_stop_0" operation="stop" crm-debug-origin="do_update_resource" crm_feature_set="3.0.9" transition-key="1:37:0:ae3f1f54-93af-4f51-8f33-2e11ffd980c4" transition-magic="0:6;1:37:0:ae3f1f54-93af-4f51-8f33-2e11ffd980c4" call-id="37" rc-code="6" op-status="0" interval="0" last-run="1429817599" last-rc-change="1429817599" exec-time="21" queue-time="0" op-digest="fd2d25ebb470945a1fe92d4f1b39995c" on_node="ip-10-111-23-197.zenoss.loc"/>
          </lrm_resource>
          <lrm_resource id="outbound-ip" type="IPaddr2" class="ocf" provider="heartbeat">
            <lrm_rsc_op id="outbound-ip_last_0" operation_key="outbound-ip_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.9" transition-key="4:34:7:ae3f1f54-93af-4f51-8f33-2e11ffd980c4" transition-magic="0:7;4:34:7:ae3f1f54-93af-4f51-8f33-2e11ffd980c4" call-id="32" rc-code="7" op-status="0" interval="0" last-run="1429817598" last-rc-change="1429817598" exec-time="33" queue-time="0" op-digest="622a004bf76f86814c0a46eda903dbbd" on_node="ip-10-111-23-197.zenoss.loc"/>
          </lrm_resource>
        </lrm_resources>
      </lrm>
    </node_state>
  </status>
</cib>

pcs status

Cluster name: serviced
Last updated: Thu Apr 23 19:36:38 2015
Last change: Thu Apr 23 19:32:53 2015
Stack: corosync
Current DC: ip-10-111-23-78.zenoss.loc (2) - partition with quorum
Version: 1.1.12-a14efad
2 Nodes configured
2 Resources configured


Online: [ ip-10-111-23-197.zenoss.loc ip-10-111-23-78.zenoss.loc ]

Full list of resources:

 Resource Group: svcd-group
     outbound-ip    (ocf::heartbeat:IPaddr2):   Started ip-10-111-23-78.zenoss.loc 
     svcd-daemon    (ocf::zenoss:serviced): FAILED ip-10-111-23-197.zenoss.loc (unmanaged) 

Failed actions:
    svcd-daemon_stop_0 on ip-10-111-23-197.zenoss.loc 'not configured' (6): call=37, status=complete, exit-reason='none', last-rc-change='Thu Apr 23 19:33:19 2015', queued=0ms, exec=21ms
    svcd-daemon_stop_0 on ip-10-111-23-197.zenoss.loc 'not configured' (6): call=37, status=complete, exit-reason='none', last-rc-change='Thu Apr 23 19:33:19 2015', queued=0ms, exec=21ms


PCSD Status:
  ip-10-111-23-197.zenoss.loc: Online
  ip-10-111-23-78.zenoss.loc: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

Does not have a way to update a clone

When a clone resource is created, there does not appear to be a way to update any of its parameters.

Attempting to use pcs resource update to do it results in an error:

# pcs resource show --all
 Resource: app1 (class=ocf provider=cloud type=DockerContainer)
  Attributes: image=fedora command=sleep 120 
  Meta Attrs: resource-stickiness=0 
  Operations: monitor interval=15s (app1-monitor-interval-15s)
              start interval=0 timeout=300 (app1-start-interval-0)
 Clone: app2-clone
  Meta Attrs: globally-unique=true clone-max=2 clone-node-max=1 
  Resource: app2 (class=ocf provider=cloud type=DockerContainer)
   Attributes: image=fedora command=sleep 120 
   Meta Attrs: resource-stickiness=0 
   Operations: monitor interval=15s (app2-monitor-interval-15s)
               start interval=0 timeout=300 (app2-start-interval-0)

# pcs resource update app2-clone meta clone-node-max=2
Traceback (most recent call last):
  File "/sbin/pcs", line 96, in <module>
    main(sys.argv[1:])
  File "/sbin/pcs", line 78, in main
    resource.resource_cmd(argv)
  File "/usr/lib/python2.7/site-packages/pcs/resource.py", line 77, in resource_cmd
    resource_update(res_id,argv)
  File "/usr/lib/python2.7/site-packages/pcs/resource.py", line 381, in resource_update
    return resource_clone_create([a.getAttribute("id")] + args, True)
  File "/usr/lib/python2.7/site-packages/pcs/resource.py", line 768, in resource_clone_create
    clone.removeChild(ma)
  File "/usr/lib64/python2.7/xml/dom/minidom.py", line 165, in removeChild
    raise xml.dom.NotFoundErr()
xml.dom.NotFoundErr

IndexError: list index out of range if the corosync.service (systemd) file doesn't exist

Hi,

If the corosync.service systemd file doesn't exist, the pcs status command returns a Python backtrace:

Cluster name: c001
WARNING: no stonith devices and stonith-enabled is not false
Last updated: Mon Nov  9 10:34:34 2015
Last change: Sun Nov  8 22:58:13 2015
Stack: corosync
Current DC: ctrl01 (1) - partition with quorum
Version: 1.1.12-561c4cf
2 Nodes configured
0 Resources configured


Online: [ ctrl01 ctrl02 ]

Full list of resources:


PCSD Status:
  ctrl01: Offline
  ctrl02: Offline

Daemon Status:
  corosync: unknown/Failed to get unit file state for corosync.service: No such file or directory
  pacemaker: unknown/
Traceback (most recent call last):
  File "/usr/sbin/pcs", line 221, in <module>
    main(sys.argv[1:])
  File "/usr/sbin/pcs", line 161, in main
    cmd_map[command](argv)
  File "/usr/lib/python2.7/dist-packages/pcs/status.py", line 16, in status_cmd
    full_status()
  File "/usr/lib/python2.7/dist-packages/pcs/status.py", line 64, in full_status
    utils.serviceStatus("  ")
  File "/usr/lib/python2.7/dist-packages/pcs/utils.py", line 1979, in serviceStatus
    print prefix + daemons[i] + ": " + status[i] + "/" + enabled[i]
IndexError: list index out of range

When I create the file, I don't have this Python error anymore.

Cluster name: c001
Last updated: Mon Nov  9 10:43:35 2015
Last change: Mon Nov  9 10:35:10 2015
Stack: corosync
Current DC: ctrl01 (1) - partition with quorum
Version: 1.1.12-561c4cf
2 Nodes configured
0 Resources configured


Online: [ ctrl01 ctrl02 ]

Full list of resources:


PCSD Status:
  ctrl01: Offline
  ctrl02: Offline

Daemon Status:
  corosync: unknown/Failed to get unit file state for pacemaker.service: No such file or directory
  pacemaker: unknown/disabled
  pcsd: active/

I'm using Debian Jessie.

Thanks guys.

pcs cluster standby somehost

The pcs command does not error out when using the command 'pcs cluster standby somehost', even if 'somehost' is not a node in the cluster. The command should validate that the node is in the cluster and, if it is not, produce an error.
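
A minimal sketch of the kind of validation being requested, assuming a hypothetical helper that returns the node names known to the cluster (this is not the actual pcs code):

def node_standby(node_name):
    # hypothetical helper: node names parsed from corosync.conf or the CIB
    known_nodes = get_cluster_node_names()
    if node_name not in known_nodes:
        raise SystemExit(
            "Error: node '%s' does not appear to be a cluster node" % node_name
        )
    # ...proceed with putting the node into standby...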

More of a question regarding order and colocations

How does one accomplish the following using pcs:

// Define resources that should always run on the same node; yet don't depend on
// each other in terms of state.
crm configure colocation RCS_COLOCATION INFINITY: ( RSC_1 RSC_2 RSC_3 ) DUMMY_RSC

// Define the order of resource operations; some can start in parallel.
crm configure order RCS_ORDER 0: ( RCS_1 RCS_2 RCS_3 ) DUMMY_RSC
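
For comparison, pcs can express the same idea with constraint sets; a hedged sketch (exact option spellings may vary between pcs versions):

pcs constraint colocation set RSC_1 RSC_2 RSC_3 sequential=false set DUMMY_RSC setoptions score=INFINITY
pcs constraint order set RSC_1 RSC_2 RSC_3 sequential=false set DUMMY_RSC setoptions kind=Optional

sequential=false mirrors the parenthesized group in the crm syntax (the members do not depend on each other's order), and kind=Optional corresponds to the score of 0 in the order constraint.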

Ubuntu: Failed to initialize the cmap API. Error CS_ERR_LIBRARY

Hi,

On my Ubuntu 15.10, following your git instructions and the Pacemaker 1.1 Clusters from Scratch guide, I failed while executing pcs cluster --debug auth ubuntu1 ubuntu2:

Running: /usr/bin/ruby -I/usr/share/pcsd/ /usr/share/pcsd/pcsd-cli.rb read_tokens
--Debug Input Start--
{}
--Debug Input End--
Return Value: 0
--Debug Output Start--
{
  "status": "ok",
  "data": {
  },
  "log": [
    "I, [2016-01-08T11:42:51.698700 #7280]  INFO -- : PCSD Debugging enabled\n",
    "D, [2016-01-08T11:42:51.698762 #7280] DEBUG -- : Did not detect RHEL 6\n",
    "I, [2016-01-08T11:42:51.698795 #7280]  INFO -- : Running: /usr/sbin/corosync-cmapctl totem.cluster_name\n",
    "I, [2016-01-08T11:42:51.698821 #7280]  INFO -- : CIB USER: hacluster, groups: \n",
    "D, [2016-01-08T11:42:51.700530 #7280] DEBUG -- : []\n",
    "D, [2016-01-08T11:42:51.700574 #7280] DEBUG -- : [\"Failed to initialize the cmap API. Error CS_ERR_LIBRARY\\n\"]\n",
    "D, [2016-01-08T11:42:51.700620 #7280] DEBUG -- : Duration: 0.001705448s\n",
    "I, [2016-01-08T11:42:51.700679 #7280]  INFO -- : Return Value: 1\n",
    "W, [2016-01-08T11:42:51.701201 #7280]  WARN -- : Cannot read config 'tokens' from '/var/lib/pcsd/tokens': No such file or directory @ rb_sysopen - /var/lib/pcsd/tokens\n",
    "E, [2016-01-08T11:42:51.701258 #7280] ERROR -- : Unable to parse tokens file: A JSON text must at least contain two octets!\n"
  ]
}
--Debug Output End--

Sending HTTP Request to: https://ubuntu1:2224/remote/check_auth
Data: None
Response Reason: [Errno 111] Connection refused
Username: hacluster
Password: 
Running: /usr/bin/ruby -I/usr/share/pcsd/ /usr/share/pcsd/pcsd-cli.rb auth
--Debug Input Start--
{"username": "hacluster", "local": false, "nodes": ["ubuntu1", "ubuntu2"], "password": "MYPASSWD", "force": false}
--Debug Input End--
Return Value: 0
--Debug Output Start--
{
  "status": "ok",
  "data": {
    "auth_responses": {
      "ubuntu1": {
        "status": "noresponse"
      },
      "ubuntu2": {
        "status": "noresponse"
      }
    },
    "sync_successful": true,
    "sync_nodes_err": [

    ],
    "sync_responses": {
    }
  },
  "log": [
    "I, [2016-01-08T11:43:00.857726 #7289]  INFO -- : PCSD Debugging enabled\n",
    "D, [2016-01-08T11:43:00.857791 #7289] DEBUG -- : Did not detect RHEL 6\n",
    "I, [2016-01-08T11:43:00.857826 #7289]  INFO -- : Running: /usr/sbin/corosync-cmapctl totem.cluster_name\n",
    "I, [2016-01-08T11:43:00.857852 #7289]  INFO -- : CIB USER: hacluster, groups: \n",
    "D, [2016-01-08T11:43:00.859573 #7289] DEBUG -- : []\n",
    "D, [2016-01-08T11:43:00.859614 #7289] DEBUG -- : [\"Failed to initialize the cmap API. Error CS_ERR_LIBRARY\\n\"]\n",
    "D, [2016-01-08T11:43:00.859657 #7289] DEBUG -- : Duration: 0.001717302s\n",
    "I, [2016-01-08T11:43:00.859714 #7289]  INFO -- : Return Value: 1\n",
    "W, [2016-01-08T11:43:00.860392 #7289]  WARN -- : Cannot read config 'tokens' from '/var/lib/pcsd/tokens': No such file or directory @ rb_sysopen - /var/lib/pcsd/tokens\n",
    "E, [2016-01-08T11:43:00.860453 #7289] ERROR -- : Unable to parse tokens file: A JSON text must at least contain two octets!\n",
    "I, [2016-01-08T11:43:00.860512 #7289]  INFO -- : SRWT Node: ubuntu2 Request: check_auth\n",
    "E, [2016-01-08T11:43:00.860551 #7289] ERROR -- : Unable to connect to node ubuntu2, no token available\n",
    "W, [2016-01-08T11:43:00.860639 #7289]  WARN -- : Cannot read config 'tokens' from '/var/lib/pcsd/tokens': No such file or directory @ rb_sysopen - /var/lib/pcsd/tokens\n",
    "E, [2016-01-08T11:43:00.860683 #7289] ERROR -- : Unable to parse tokens file: A JSON text must at least contain two octets!\n",
    "I, [2016-01-08T11:43:00.860707 #7289]  INFO -- : SRWT Node: ubuntu1 Request: check_auth\n",
    "E, [2016-01-08T11:43:00.860740 #7289] ERROR -- : Unable to connect to node ubuntu1, no token available\n",
    "I, [2016-01-08T11:43:00.862722 #7289]  INFO -- : No response from: ubuntu1 request: /auth, exception: Connection refused - connect(2) for \"ubuntu1\" port 2224\n",
    "I, [2016-01-08T11:43:00.862863 #7289]  INFO -- : No response from: ubuntu2 request: /auth, exception: Connection refused - connect(2) for \"ubuntu2\" port 2224\n"
  ]
}
--Debug Output End--

Error: Unable to communicate with ubuntu1
Error: Unable to communicate with ubuntu2

I then set PCSD_DEBUG=true, ran systemctl restart pcsd, and the log showed:

D, [2016-01-08T11:55:57.324636 #7422] DEBUG -- : Config files sync thread started
D, [2016-01-08T11:55:57.324736 #7422] DEBUG -- : Cannot read config '/var/lib/pcsd/cfgsync_ctl': No such file or directory @ rb_sysopen - /var/lib/pcsd/cfgsync_ctl
I, [2016-01-08T11:55:57.324796 #7422]  INFO -- : Running: /usr/sbin/corosync-cmapctl totem.cluster_name
I, [2016-01-08T11:55:57.324838 #7422]  INFO -- : CIB USER: hacluster, groups: 
D, [2016-01-08T11:55:57.326625 #7422] DEBUG -- : []
D, [2016-01-08T11:55:57.326679 #7422] DEBUG -- : ["Failed to initialize the cmap API. Error CS_ERR_LIBRARY\n"]
D, [2016-01-08T11:55:57.326726 #7422] DEBUG -- : Duration: 0.0017842s
I, [2016-01-08T11:55:57.326796 #7422]  INFO -- : Return Value: 1
D, [2016-01-08T11:55:57.327093 #7422] DEBUG -- : Config files sync thread finished
D, [2016-01-08T11:55:57.327165 #7422] DEBUG -- : Cannot read config '/var/lib/pcsd/cfgsync_ctl': No such file or directory @ rb_sysopen - /var/lib/pcsd/cfgsync_ctl

I have no idea what other information to provide for debugging; please tell me if there are other ways to test.
Thank you very much!

Jim

fixed paths and support of --prefix

Duplicate from pacemaker mailing list.

I checked the source code and found that the current version uses a mixture of fixed paths for the binaries and calls without paths.
E.g. "cibadmin" or "/usr/sbin/crm_mon".
Do you have plans to support installations where the software is installed in /opt/ha?
This is the case when the option --prefix /opt/ha is used with configure of corosync/pacemaker etc.

In this case corosync.conf is installed in /opt/ha/etc/corosync/corosync.conf and the binaries in /opt/ha/bin/ or /opt/ha/sbin/ respectively.

Andreas

pcs should allow storing the credentials token outside $HOME

Right now the pcs client stores its credentials in whatever directory $HOME points to. If a user switch is performed and the $HOME variable is not updated, pcs will end up storing data in the wrong user's home directory. And if $HOME is unset, it will end up using /.pcs to store the token.

This can especially cause a problem with automation utilities which often wipe the environment.

I see 2 approaches to handling this.

  1. If the effective user id is 0, use an alternate path, such as /var/lib/pcs/token.
  2. Allow the user to override the path where the token is stored (a sketch of such a lookup follows this list).
    This can somewhat be done now by doing HOME=/somewhere/else pcs foo bar, but it will still use /somewhere/else/.pcs as the path.
    It would be desirable to instead have something like PCS_DIR=/my/path/to/pcs pcs foo bar.
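
A minimal sketch of the lookup order proposed above, assuming a hypothetical PCS_TOKEN_FILE override variable (none of this is current pcs behaviour):

import os

def token_file_path():
    # hypothetical explicit override, highest priority
    override = os.environ.get("PCS_TOKEN_FILE")
    if override:
        return override
    # root uses a fixed system path instead of relying on $HOME,
    # mirroring suggestion 1 above
    if os.geteuid() == 0:
        return "/var/lib/pcs/token"
    # everyone else keeps the current per-user location under $HOME
    return os.path.join(os.path.expanduser("~"), ".pcs")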

pcs ssh to pcsd timeout too long

I am testing Pacemaker with Scientific Linux 7.0. I created VMware VMs and installed SL 7.0 on them, and on top of the VMware VMs I created Linux KVM guests which are controlled by Pacemaker.

The environment is unstable and the VMware VMs often hang, so I think it is a great environment for testing Pacemaker.

I found that pcsd causes trouble when one of the VMware VMs hangs. (The hung VM will respond to an ssh connect but gives no further reply.)

When I issue the command "pcs cluster status", the pcs script tries to ssh to the hung VM and hangs there for several minutes.

Is there a proper way to disable pcsd completely under EL 7.0? Or maybe pcs should use ssh with a proper timeout value?
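
For what it's worth, pcs talks to pcsd over HTTPS (port 2224) rather than ssh, and more recent pcs releases accept a --request-timeout option that caps how long such network requests may take; a hedged example:

pcs --request-timeout=10 cluster status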

pcs doesn't check existence of OCF RA path, creating such resource leads to /usr/libexec/pacemaker/cib and /usr/libexec/pacemaker/crmd coredumps

I'm using pcs-0.9.26-10.0.1.el6.noarch from the Oracle Linux public repo, on OL 6.4 x86_64.

How to reproduce (just run it and you will see the problem):
Creating a resource with a non-existent, incorrectly formed RA path (very specific) breaks pcs and cibadmin (we misspelled and used "::" instead of ":"):

pcs resource create RepachedStats ocf::farpost::MemcachedStats hostname="memcache-active.localdomain.localhost" metric="memc.stats" lockfile="/var/run/memcached-stats.lock" port="11211" senderopts="-c /etc/zabbix/zabbix_agentd.conf" op monitor interval="30s"

After running this command, there is no way to remove this resource with the incorrect RA path; 'pcs resource delete' and 'cibadmin' constantly time out. I also see coredumps in /var/lib/pacemaker/cores/ for:
/usr/libexec/pacemaker/crmd
/usr/libexec/pacemaker/cib

Regards,
Artyom A. Konovalenko

pcsd on centos 6.5

I'm looking to install this on CentOS 6.5. I've gotten as far as trying to start the service, at which point I get:

service pcsd start
Starting pcsd: /usr/local/rvm/rubies/ruby-1.9.3-p547/lib/ruby/site_ruby/1.9.1/rubygems/core_ext/kernel_require.rb:55:in `require': cannot load such file -- sinatra (LoadError)
	from /usr/local/rvm/rubies/ruby-1.9.3-p547/lib/ruby/site_ruby/1.9.1/rubygems/core_ext/kernel_require.rb:55:in `require'
	from /usr/lib/pcsd/pcsd.rb:1:in `<top (required)>'
	from /usr/local/rvm/rubies/ruby-1.9.3-p547/lib/ruby/site_ruby/1.9.1/rubygems/core_ext/kernel_require.rb:55:in `require'
	from /usr/lib/pcsd/ssl.rb:65:in

'
[FAILED]

I'll play around with ruby and rvm to see if I can tweak this but wanted to see if it's possible.

debian squeeze

I cannot compile it on Debian.

chmod: cannot access `/usr/lib/python2.6/dist-packages/pcs/pcs.py': No such file or directory

cfgsync_ctl and pcsd/pcs_settings.conf are not created

Hi guys,

pcsd complains about missing cfgsync_ctl and pcs_settings.conf files.

D, [2016-02-20T15:01:55.259320 #18793] DEBUG -- : Config files sync thread started
D, [2016-02-20T15:01:55.259562 #18793] DEBUG -- : Cannot read config '/var/lib/pcsd/cfgsync_ctl': No such file or directory - /var/lib/pcsd/cfgsync_ctl
I, [2016-02-20T15:01:55.259656 #18793]  INFO -- : Running: /usr/sbin/corosync-cmapctl totem.cluster_name
I, [2016-02-20T15:01:55.259725 #18793]  INFO -- : CIB USER: hacluster, groups: 
D, [2016-02-20T15:01:55.266683 #18793] DEBUG -- : ["totem.cluster_name (str) = uoi\n"]
D, [2016-02-20T15:01:55.266785 #18793] DEBUG -- : Duration: 0.006955592s
I, [2016-02-20T15:01:55.266857 #18793]  INFO -- : Return Value: 0
D, [2016-02-20T15:01:55.266914 #18793] DEBUG -- : Config files sync thread fetching
I, [2016-02-20T15:01:55.266964 #18793]  INFO -- : Running: /usr/sbin/pcs status nodes corosync
I, [2016-02-20T15:01:55.267004 #18793]  INFO -- : CIB USER: hacluster, groups: 
D, [2016-02-20T15:01:55.411948 #18793] DEBUG -- : ["Corosync Nodes:\n", " Online: ctrl01.uoi.io ctrl02.uoi.io ctrl03.uoi.io \n", " Offline: \n"]
D, [2016-02-20T15:01:55.412113 #18793] DEBUG -- : Duration: 0.144924801s
I, [2016-02-20T15:01:55.412201 #18793]  INFO -- : Return Value: 0
D, [2016-02-20T15:01:55.412310 #18793] DEBUG -- : Fetching configs from the cluster
I, [2016-02-20T15:01:55.412793 #18793]  INFO -- : SRWT Node: ctrl03.uoi.io Request: get_configs
I, [2016-02-20T15:01:55.412634 #18793]  INFO -- : SRWT Node: ctrl01.uoi.io Request: get_configs
I, [2016-02-20T15:01:55.414021 #18793]  INFO -- : SRWT Node: ctrl02.uoi.io Request: get_configs
I, [2016-02-20T15:01:55.462761 #18793]  INFO -- : Running: /usr/sbin/corosync-cmapctl totem.cluster_name
I, [2016-02-20T15:01:55.462836 #18793]  INFO -- : CIB USER: hacluster, groups: 
D, [2016-02-20T15:01:55.470066 #18793] DEBUG -- : ["totem.cluster_name (str) = uoi\n"]
D, [2016-02-20T15:01:55.470219 #18793] DEBUG -- : Duration: 0.00721271s
I, [2016-02-20T15:01:55.470329 #18793]  INFO -- : Return Value: 0
W, [2016-02-20T15:01:55.470680 #18793]  WARN -- : Cannot read config 'pcs_settings.conf' from '/var/lib/pcsd/pcs_settings.conf': No such file or directory - /var/lib/pcsd/pcs_settings.conf
D, [2016-02-20T15:01:55.470786 #18793] DEBUG -- : permission check action=full username=hacluster groups=[]
D, [2016-02-20T15:01:55.470829 #18793] DEBUG -- : permission granted for superuser
W, [2016-02-20T15:01:55.470930 #18793]  WARN -- : Cannot read config 'pcs_settings.conf' from '/var/lib/pcsd/pcs_settings.conf': No such file or directory - /var/lib/pcsd/pcs_settings.conf
::ffff:10.0.0.107 - - [20/Feb/2016:15:01:55 -0500] "GET /remote/get_configs HTTP/1.1" 200 357 0.0097
::ffff:10.0.0.107 - - [20/Feb/2016:15:01:55 -0500] "GET /remote/get_configs HTTP/1.1" 200 357 0.0099
ctrl01.uoi.io - - [20/Feb/2016:15:01:55 EST] "GET /remote/get_configs HTTP/1.1" 200 357
- -> /remote/get_configs
E, [2016-02-20T15:01:55.522601 #18793] ERROR -- : Unable to parse pcs_settings file: A JSON text must at least contain two octets!
E, [2016-02-20T15:01:55.522693 #18793] ERROR -- : Unable to parse pcs_settings file: A JSON text must at least contain two octets!
W, [2016-02-20T15:01:55.522838 #18793]  WARN -- : Cannot read config 'pcs_settings.conf' from '/var/lib/pcsd/pcs_settings.conf': No such file or directory - /var/lib/pcsd/pcs_settings.conf
E, [2016-02-20T15:01:55.523006 #18793] ERROR -- : Unable to parse pcs_settings file: A JSON text must at least contain two octets!
D, [2016-02-20T15:01:55.523107 #18793] DEBUG -- : Config files sync thread finished
D, [2016-02-20T15:01:55.523189 #18793] DEBUG -- : Cannot read config '/var/lib/pcsd/cfgsync_ctl': No such file or directory - /var/lib/pcsd/cfgsync_ctl

My cluster is running well, but even if I create those files manually and restart pcsd, I still get error messages in /var/log/pcsd/pcsd.log:

::ffff:10.0.0.107 - - [20/Feb/2016:15:10:12 -0500] "GET /remote/get_configs HTTP/1.1" 200 403 0.0135
::ffff:10.0.0.107 - - [20/Feb/2016:15:10:12 -0500] "GET /remote/get_configs HTTP/1.1" 200 403 0.0137
ctrl01.uoi.io - - [20/Feb/2016:15:10:12 EST] "GET /remote/get_configs HTTP/1.1" 200 403
- -> /remote/get_configs
E, [2016-02-20T15:10:12.469539 #19074] ERROR -- : Unable to parse pcs_settings file: A JSON text must at least contain two octets!
E, [2016-02-20T15:10:12.469680 #19074] ERROR -- : Unable to parse pcs_settings file: A JSON text must at least contain two octets!
E, [2016-02-20T15:10:12.469774 #19074] ERROR -- : Unable to parse pcs_settings file: A JSON text must at least contain two octets!
E, [2016-02-20T15:10:12.470094 #19074] ERROR -- : Unable to parse pcs_settings file: A JSON text must at least contain two octets!
D, [2016-02-20T15:10:12.470193 #19074] DEBUG -- : Config files sync thread finished
D, [2016-02-20T15:10:12.470328 #19074] DEBUG -- : Cannot read config '/var/lib/pcsd/cfgsync_ctl': A JSON text must at least contain two octets!

It's like I am unauthorized:

ctrl01.uoi.io - - [20/Feb/2016:15:10:12 EST] "GET /remote/get_configs HTTP/1.1" 200 403

Any idea ?

Thanks.

utils.py : stonithCheck

The following message is issued even though stonith devices are defined. Not sure if this has other negative impacts at this time.

WARNING: no stonith devices and stonith-enabled is not false

stonithCheck limits its search:

primitives = et.findall("configuration/resources/primitive")
for p in primitives:
    if p.attrib["class"] == "stonith":
        return False

But it's possible to have a primitive resource nested inside of a clone.
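
A hedged sketch of a broader check that also finds stonith primitives nested inside clones or groups, by searching all descendants of the resources section (names follow the snippet above, not necessarily the current pcs source):

resources = et.find("configuration/resources")
if resources is not None:
    # .//primitive matches primitives at any depth, including inside clones and groups
    for p in resources.findall(".//primitive"):
        if p.attrib.get("class") == "stonith":
            return False
return True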

Issue observed while installing pcs from source on x86_64 Ubuntu 14.04

Hi, I installed pcs from source on x86_64 Ubuntu 14.04.
These are the steps I followed to install pcsd:
$ git clone https://github.com/feist/pcs.git
$ cd pcs/ && make install
$ cd pcsd/ && make get_gems
$ cd ../ && make install pcsd.

But while running the pcs, I am getting the following error:

$ pcs --debug cluster auth ubuntu
Username: hacluster
Password:
Running: /usr/bin/ruby -I/usr/lib/pcsd/ /usr/lib/pcsd/pcsd-cli.rb read_tokens
--Debug Input Start--
{}
--Debug Input End--

Return Value: 1
--Debug Output Start--
/usr/bin/ruby: No such file or directory -- /usr/lib/pcsd/pcsd-cli.rb (LoadError)

--Debug Output End--

Sending HTTP Request to: https://localhost:2224/run_pcs
Data: command=%5B%22--debug%22%2C+%22cluster%22%2C+%22auth%22%2C+%22ubuntu%22%2C+%22-u%22%2C+%22hacluster%22%2C+%22-p%22%2C+%22root%22%5D
Response Reason: [Errno 111] Connection refused
Error: Unable to connect to localhost ([Errno 111] Connection refused)

The reason for the LoadError above is that the file is installed in the /usr/share/pcsd directory but is expected in /usr/lib/pcsd.
Kindly provide pointers towards resolving the issue, if any.

Removing a single resource from a group also removes all the constraints mentioning that group

When removing a single resource from a group, without deleting the group, all the constraints mentioning the group are removed too.

Example:

# pcs -f group_remove_bug.xml resource group remove g_httpd r_nginx
Removing Constraint - colocation-g_httpd-cl_DRBD_httpd-INFINITY
Removing Constraint - location-g_httpd-vm-nmb-lamp-01-50
Removing Constraint - order-cl_DRBD_httpd-g_httpd-mandatory

order and colocation of group/clone resources

We should be able to build constraints between all resource types, group, clone, multistate, and primitive.

It looks like we can only do this for primitives at the moment.

  1. make 2 groups.
    pcs resource create Q ocf:pacemaker:Dummy op monitor interval=10s
    pcs resource create R ocf:pacemaker:Dummy op monitor interval=10s
    pcs resource create S ocf:pacemaker:Dummy op monitor interval=10s
    pcs resource group add GROUP1 Q R S

pcs resource create T ocf:pacemaker:Dummy op monitor interval=10s
pcs resource create U ocf:pacemaker:Dummy op monitor interval=10s
pcs resource create V ocf:pacemaker:Dummy op monitor interval=10s
pcs resource group add GROUP2 T U V

  2. Building constraints between them does not work:
    pcs order add GROUP1 GROUP2
    Error: Resource 'GROUP1' does not exist

Looking at pcs/utils.py, we are only looking for primitives in the does_resource_exist() function, which is what the order constraint checks.
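
A hedged sketch of a broader existence check that matches groups, clones and master/slave resources as well as primitives; the helper name is hypothetical and the element names follow the Pacemaker CIB schema:

def resource_or_container_exists(cib_root, resource_id):
    resources = cib_root.find("configuration/resources")
    if resources is None:
        return False
    # look at primitives and at the container elements currently being skipped
    for tag in ("primitive", "group", "clone", "master"):
        for element in resources.iter(tag):
            if element.get("id") == resource_id:
                return True
    return False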

ocf_log causes meta-data parsing to fail

When a resource agent uses ocf_log, it causes the meta-data operation to fail.

For example, when I modify the pacemaker/Dummy resource agent and add an ocf_log debug line like this:

meta_data() {
    ocf_log debug foo
    cat <<END
<?xml version="1.0"?>
<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
<resource-agent name="Dummy" version="1.0">
<version>1.0</version>

I get

# pcs resource describe ocf:pacemaker:Dummy
Resource options for: Dummy
Error: Unable to parse xml for: Dummy

This should not cause an error. ocf_log output is sent to STDERR; PCS should only be reading the XML from STDOUT.

Tested on Fedora 20 with PCS 0.9.44
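
A hedged sketch of the stdout/stderr separation being suggested, using the standard subprocess module (the agent path and environment are placeholders, not the actual pcs implementation):

import os
import subprocess

def agent_metadata_xml(agent_path):
    # run the agent's meta-data action, capturing the two streams separately
    proc = subprocess.Popen(
        [agent_path, "meta-data"],
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        env=dict(os.environ, OCF_ROOT="/usr/lib/ocf"),
    )
    xml_out, log_out = proc.communicate()
    # only stdout is parsed as XML; ocf_log output on stderr is logged or ignored
    return xml_out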

Attempting to use an operation timeout without passing an interval generates xml schema error

When trying to set an operation timeout (op start timeout=30s) without passing the interval parameter, an XML schema error is returned. If interval is not passed, it should default to 0.

# pcs resource create test2 ocf:pacemaker:Dummy op start timeout=30s
Error: Unable to create resource/fence device
Call cib_create failed (-203): Update does not conform to the configured schema
<cib epoch="64" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" cib-last-written="Wed Jun  4 02:22:19 2014" update-origin="i-053f1f59" update-client="cibadmin" crm_feature_set="3.0.8" have-quorum="1" dc-uuid="3">
  <configuration>
    <crm_config>
      <cluster_property_set id="cib-bootstrap-options">
        <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.11-1.fc20-9d39a6b"/>
        <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/>
        <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="false"/>
        <nvpair id="cib-bootstrap-options-last-lrm-refresh" name="last-lrm-refresh" value="1401848085"/>
      </cluster_property_set>
    </crm_config>
    <nodes>
      <node id="1" uname="i-053f1f59"/>
      <node id="2" uname="i-093f1f55">
        <instance_attributes id="nodes-2">
          <nvpair id="nodes-2-standby" name="standby" value="on"/>
        </instance_attributes>
      </node>
      <node id="3" uname="i-083f1f54">
        <instance_attributes id="nodes-3">
          <nvpair id="nodes-3-standby" name="standby" value="on"/>
        </instance_attributes>
      </node>
    </nodes>
    <resources>
      <primitive class="ocf" id="app1" provider="cloud" type="DockerContainer">
        <instance_attributes id="app1-instance_attributes">
          <nvpair id="app1-instance_attributes-image" name="image" value="fedora"/>
          <nvpair id="app1-instance_attributes-command" name="command" value="sh -c &apos;while true; do curl -X POST -d &quot;$(date) $(env)&quot; http://requestb.in/17b7m861; sleep 30; done&apos;"/>
        </instance_attributes>
        <operations>
          <op id="app1-monitor-interval-1m" interval="1m" name="monitor"/>
          <op id="app1-start-interval-0" interval="0" name="start" timeout="300"/>
        </operations>
        <meta_attributes id="app1-meta_attributes"/>
      </primitive>
      <primitive class="ocf" id="test1" provider="pacemaker" type="Dummy">
        <instance_attributes id="test1-instance_attributes">
          <nvpair id="test1-instance_attributes-fake" name="fake" value="hello world"/>
        </instance_attributes>
      </primitive>
      <primitive class="ocf" id="test2" provider="pacemaker" type="Dummy">
        <instance_attributes id="test2-instance_attributes"/>
        <operations>
          <op id="test2-start-timeout-30s" name="start" timeout="30s"/>
        </operations>
      </primitive>
    </resources>
    <constraints/>
  </configuration>
  <status>
    <node_state id="1" uname="i-053f1f59" in_ccm="true" crmd="online" join="member" expected="member" crm-debug-origin="do_update_resource">
      <transient_attributes id="1">
        <instance_attributes id="status-1">
          <nvpair id="status-1-shutdown" name="shutdown" value="0"/>
          <nvpair id="status-1-probe_complete" name="probe_complete" value="true"/>
          <nvpair id="status-1-last-failure-app1" name="last-failure-app1" value="1401847926"/>
        </instance_attributes>
      </transient_attributes>
      <lrm id="1">
        <lrm_resources>
          <lrm_resource id="app1" type="DockerContainer" class="ocf" provider="cloud">
            <lrm_rsc_op id="app1_last_failure_0" operation_key="app1_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="4:187:7:95bbffb5-41cc-447d-acc1-8e28c5f94777" transition-magic="0:0;4:187:7:95bbffb5-41cc-447d-acc1-8e28c5f94777" call-id="311" rc-code="0" op-status="0" interval="0" last-run="1401848086" last-rc-change="1401848086" exec-time="153" queue-time="0" op-digest="6d9e7e9ff44d3b3234dae9874cfccd85"/>
            <lrm_rsc_op id="app1_monitor_60000" operation_key="app1_monitor_60000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="10:188:0:95bbffb5-41cc-447d-acc1-8e28c5f94777" transition-magic="0:0;10:188:0:95bbffb5-41cc-447d-acc1-8e28c5f94777" call-id="312" rc-code="0" op-status="0" interval="60000" last-rc-change="1401848086" exec-time="69" queue-time="0" op-digest="def3bbf9787113e013d0b058834c9c33"/>
          </lrm_resource>
          <lrm_resource id="test1" type="Dummy" class="ocf" provider="pacemaker">
            <lrm_rsc_op id="test1_last_0" operation_key="test1_start_0" operation="start" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="12:189:0:95bbffb5-41cc-447d-acc1-8e28c5f94777" transition-magic="0:0;12:189:0:95bbffb5-41cc-447d-acc1-8e28c5f94777" call-id="317" rc-code="0" op-status="0" interval="0" last-run="1401848384" last-rc-change="1401848384" exec-time="31" queue-time="0" op-digest="460b154c607938c6020b149b217fee26" op-force-restart=" state  op_sleep " op-restart-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
          </lrm_resource>
        </lrm_resources>
      </lrm>
    </node_state>
    <node_state id="2" uname="i-093f1f55" crmd="online" crm-debug-origin="do_update_resource" in_ccm="true" join="member" expected="member">
      <transient_attributes id="2">
        <instance_attributes id="status-2">
          <nvpair id="status-2-probe_complete" name="probe_complete" value="true"/>
          <nvpair id="status-2-shutdown" name="shutdown" value="0"/>
        </instance_attributes>
      </transient_attributes>
      <lrm id="2">
        <lrm_resources>
          <lrm_resource id="app1" type="DockerContainer" class="ocf" provider="cloud">
            <lrm_rsc_op id="app1_last_0" operation_key="app1_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="7:188:7:95bbffb5-41cc-447d-acc1-8e28c5f94777" transition-magic="0:7;7:188:7:95bbffb5-41cc-447d-acc1-8e28c5f94777" call-id="105" rc-code="7" op-status="0" interval="0" last-run="1401848086" last-rc-change="1401848086" exec-time="26" queue-time="1" op-digest="6d9e7e9ff44d3b3234dae9874cfccd85"/>
          </lrm_resource>
          <lrm_resource id="test1" type="Dummy" class="ocf" provider="pacemaker">
            <lrm_rsc_op id="test1_last_0" operation_key="test1_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="9:189:7:95bbffb5-41cc-447d-acc1-8e28c5f94777" transition-magic="0:7;9:189:7:95bbffb5-41cc-447d-acc1-8e28c5f94777" call-id="109" rc-code="7" op-status="0" interval="0" last-run="1401848384" last-rc-change="1401848384" exec-time="28" queue-time="0" op-digest="460b154c607938c6020b149b217fee26" op-force-restart=" state  op_sleep " op-restart-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
          </lrm_resource>
        </lrm_resources>
      </lrm>
    </node_state>
    <node_state id="3" uname="i-083f1f54" crmd="online" crm-debug-origin="do_update_resource" in_ccm="true" join="member" expected="member">
      <transient_attributes id="3">
        <instance_attributes id="status-3">
          <nvpair id="status-3-probe_complete" name="probe_complete" value="true"/>
          <nvpair id="status-3-shutdown" name="shutdown" value="0"/>
        </instance_attributes>
      </transient_attributes>
      <lrm id="3">
        <lrm_resources>
          <lrm_resource id="app1" type="DockerContainer" class="ocf" provider="cloud">
            <lrm_rsc_op id="app1_last_0" operation_key="app1_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="5:188:7:95bbffb5-41cc-447d-acc1-8e28c5f94777" transition-magic="0:7;5:188:7:95bbffb5-41cc-447d-acc1-8e28c5f94777" call-id="105" rc-code="7" op-status="0" interval="0" last-run="1401848085" last-rc-change="1401848085" exec-time="47" queue-time="0" op-digest="6d9e7e9ff44d3b3234dae9874cfccd85"/>
          </lrm_resource>
          <lrm_resource id="test1" type="Dummy" class="ocf" provider="pacemaker">
            <lrm_rsc_op id="test1_last_0" operation_key="test1_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.8" transition-key="7:189:7:95bbffb5-41cc-447d-acc1-8e28c5f94777" transition-magic="0:7;7:189:7:95bbffb5-41cc-447d-acc1-8e28c5f94777" call-id="109" rc-code="7" op-status="0" interval="0" last-run="1401848383" last-rc-change="1401848383" exec-time="46" queue-time="0" op-digest="460b154c607938c6020b149b217fee26" op-force-restart=" state  op_sleep " op-restart-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
          </lrm_resource>
        </lrm_resources>
      </lrm>
    </node_state>
  </status>
</cib>
# pcs resource create test2 ocf:pacemaker:Dummy op start timeout=30s interval=0
# pcs resource show test2      
 Resource: test2 (class=ocf provider=pacemaker type=Dummy)
  Operations: start interval=0 timeout=30s (test2-start-interval-0)

CentOS 6.6 pcs status python NameError

Hi,
First of all, sorry if this is not the appropriate place for reporting this issue.
I just updated to CentOS 6.6 x64 and began setting up a cluster with pacemaker-cman.
It comes with pcs 0.9.123-9.el6.centos.

pcs status throws the following error:

pcs status
[OUTPUT OMITTED]
Traceback (most recent call last):
  File "/usr/sbin/pcs", line 138, in <module>
    main(sys.argv[1:])
  File "/usr/sbin/pcs", line 127, in main
    status.status_cmd(argv)
  File "/usr/lib/python2.6/site-packages/pcs/status.py", line 13, in status_cmd
    full_status()
  File "/usr/lib/python2.6/site-packages/pcs/status.py", line 60, in full_status
    utils.serviceStatus("  ")
  File "/usr/lib/python2.6/site-packages/pcs/utils.py", line 1504, in serviceStatus
    if is_systemctl():
  File "/usr/lib/python2.6/site-packages/pcs/utils.py", line 1476, in is_systemctl
    elif re.search(r'Foobar Linux release 6\.', issue):
NameError: global name 'issue' is not defined

I found this piece of code to be somewhat broken:

def is_systemctl():
    if os.path.exists('/usr/bin/systemctl'):
        return True
    elif re.search(r'Foobar Linux release 6\.', issue):
        return True
    elif re.search(r'CentOS release 6\.', issue):
        return True
    else:
        return False

and rewrote it like this:

def is_systemctl():
    if os.path.exists('/usr/bin/systemctl'):
        return True
    else:
        return False

Hopefully nothing is broken by this edit.
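
For what it's worth, a common way to detect a systemd-based system without matching distribution names is to check for systemd's runtime directory; a hedged sketch:

import os

def is_systemctl():
    # /run/systemd/system exists only when the machine was booted with systemd
    return os.path.isdir("/run/systemd/system")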

pcs depends on systemctl which is not available in centos

root@cluster1:~# pcs cluster start
Starting Cluster... Unable to locate command: systemctl
root@cluster1:~# uname -a
Linux cluster1 2.6.32-279.el6.i686 #1 SMP Fri Jun 22 10:59:55 UTC 2012 i686 i686 i386 GNU/Linux
root@cluster1:~# cat /etc/*release
CentOS release 6.3 (Final)

Resource ordering within a group

Is this something that is supported? Say I have two resources per group; is it possible to order the resources within the group but also set a dependency saying that resource group A can't start until resource group B starts?

The pcs app allows the commands to be set up, but what we are finding is that in reality this does not work.
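
For reference, members of a group are already started in the listed order and kept on the same node, and a dependency between two groups can be expressed on the group ids themselves; a hedged example with placeholder names (older pcs versions had bugs with constraints on non-primitive resources, as other issues here note):

pcs constraint order start groupB then start groupA
pcs constraint colocation add groupA with groupB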

Unable to delete remote node, Pacemaker still having some traces

Hi guys,

After removing a remote node from Pacemaker via the pcs command, I am unable to add this node again.

Adding the node:

# pcs resource create cmpt01.uoi.io ocf:pacemaker:remote

The node appears as a remote node.

Cluster name: uoi
Last updated: Thu Feb 18 15:27:13 2016      Last change: Thu Feb 18 15:27:11 2016 by root via cibadmin on ctrl01.uoi.io
Stack: corosync
Current DC: ctrl01.uoi.io (version 1.1.13-10.el7_2.2-44eb2dd) - partition with quorum
4 nodes and 1 resource configured

Online: [ ctrl01.uoi.io ctrl02.uoi.io ctrl03.uoi.io ]
RemoteOnline: [ cmpt01.uoi.io ]

Full list of resources:

 cmpt01.uoi.io  (ocf::pacemaker:remote):    Started ctrl01.uoi.io

PCSD Status:
  ctrl01.uoi.io: Online
  ctrl02.uoi.io: Online
  ctrl03.uoi.io: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

Delete the node.

# pcs resource delete cmpt01.uoi.io
Attempting to stop: cmpt01.uoi.io...Stopped

The delete has been performed if we trust pcs status.

Cluster name: uoi
Last updated: Thu Feb 18 15:30:03 2016      Last change: Thu Feb 18 15:29:47 2016 by root via cibadmin on ctrl01.uoi.io
Stack: corosync
Current DC: ctrl01.uoi.io (version 1.1.13-10.el7_2.2-44eb2dd) - partition with quorum
3 nodes and 0 resources configured

Online: [ ctrl01.uoi.io ctrl02.uoi.io ctrl03.uoi.io ]

Full list of resources:


PCSD Status:
  ctrl01.uoi.io: Online
  ctrl02.uoi.io: Online
  ctrl03.uoi.io: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

If I try to add back the same remote node, I get this error:

# pcs resource create cmpt01.uoi.io ocf:pacemaker:remote
Error: unable to create resource/fence device 'cmpt01.uoi.io', 'cmpt01.uoi.io' already exists on this system

Via the /usr/sbin/cibadmin -l -Q command I can see some traces of the node (at the end of the output).

<cib crm_feature_set="3.0.10" validate-with="pacemaker-2.3" epoch="31" num_updates="3" admin_epoch="0" cib-last-written="Thu Feb 18 15:29:47 2016" update-origin="ctrl01.uoi.io" update-client="cibadmin" update-user="root" have-quorum="1" dc-uuid="1">
  <configuration>
    <crm_config>
      <cluster_property_set id="cib-bootstrap-options">
        <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="false"/>
        <nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="ignore"/>
        <nvpair id="cib-bootstrap-options-have-watchdog" name="have-watchdog" value="false"/>
        <nvpair id="cib-bootstrap-options-symmetric-cluster" name="symmetric-cluster" value="true"/>
        <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.13-10.el7_2.2-44eb2dd"/>
        <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/>
        <nvpair id="cib-bootstrap-options-cluster-name" name="cluster-name" value="uoi"/>
      </cluster_property_set>
    </crm_config>
    <nodes>
      <node id="1" uname="ctrl01.uoi.io"/>
      <node id="2" uname="ctrl02.uoi.io"/>
      <node id="3" uname="ctrl03.uoi.io"/>
    </nodes>
    <resources/>
    <constraints/>
  </configuration>
  <status>
    <node_state id="2" uname="ctrl02.uoi.io" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
      <lrm id="2">
        <lrm_resources/>
      </lrm>
      <transient_attributes id="2">
        <instance_attributes id="status-2">
          <nvpair id="status-2-shutdown" name="shutdown" value="0"/>
          <nvpair id="status-2-probe_complete" name="probe_complete" value="true"/>
        </instance_attributes>
      </transient_attributes>
    </node_state>
    <node_state id="1" uname="ctrl01.uoi.io" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
      <lrm id="1">
        <lrm_resources/>
      </lrm>
      <transient_attributes id="1">
        <instance_attributes id="status-1">
          <nvpair id="status-1-shutdown" name="shutdown" value="0"/>
          <nvpair id="status-1-probe_complete" name="probe_complete" value="true"/>
        </instance_attributes>
      </transient_attributes>
    </node_state>
    <node_state id="3" uname="ctrl03.uoi.io" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
      <lrm id="3">
        <lrm_resources/>
      </lrm>
      <transient_attributes id="3">
        <instance_attributes id="status-3">
          <nvpair id="status-3-shutdown" name="shutdown" value="0"/>
          <nvpair id="status-3-probe_complete" name="probe_complete" value="true"/>
        </instance_attributes>
      </transient_attributes>
    </node_state>
    <node_state id="cmpt01.uoi.io">
      <transient_attributes id="cmpt01.uoi.io">
        <instance_attributes id="status-cmpt01.uoi.io"/>
      </transient_attributes>
    </node_state>
  </status>
</cib>

The only way to add back the remote node is to restart Pacemaker.
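
A possible workaround, sketched here on the assumption that it is safe to drop the stale status entry, would be to delete the leftover node_state element directly and then re-create the resource:

# assumption: removing the stale status entry by hand is harmless (not verified)
cibadmin --delete --scope status --xml-text '<node_state id="cmpt01.uoi.io"/>'
pcs resource create cmpt01.uoi.io ocf:pacemaker:remote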

Any ideas?

Thanks

PCS only looks in the installed packages path (/usr/lib/systemd/system) for unit files, ignoring local-only units in /etc/systemd/system

From the systemd man pages (man systemd.unit):

Unit File Load Path

Unit files are loaded from a set of paths determined during compilation, described in the two tables below. Unit files found in directories listed earlier override files with the same name in directories lower in the list.

When the variable $SYSTEMD_UNIT_PATH is set, the contents of this variable overrides the unit load path. If $SYSTEMD_UNIT_PATH ends with an empty component (":"), the usual unit load path will be appended to the contents of the variable.

Table 1. Load path when running in system mode (--system).

Path                      Description
/etc/systemd/system       Local configuration
/run/systemd/system       Runtime units
/usr/lib/systemd/system   Units of installed packages
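
For illustration, systemctl itself reports which directory a unit is loaded from; my-local.service below is a hypothetical unit that exists only in /etc/systemd/system:

# a local-only unit resolves to /etc/systemd/system, a path pcs currently never consults
systemctl show -p FragmentPath my-local.service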

pcs cluster pacemaker remove is not functional

It looks like this functionality is simply missing from cluster.py, but pcs advertises it as a possible action when we run: pcs cluster -h

I guess it should run: crm_node --force --remove ? It would be helpful to incorporate this into pcs, so we can use one tool instead of N.
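
For reference, the manual step today looks roughly like this (the node name is a placeholder, and crm_node option syntax may differ between Pacemaker versions):

# remove the remaining traces of a deleted node from the cluster
crm_node --remove=old-node --force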

Thanks!

Error: unable to get current list of providers

I'm using pcs-0.9.27-1.fc17.x86_64 from http://people.redhat.com/cfeist/pcs on a Fedora17-x86_64 2 node cluster. When I run "pcs resource providers" I see the following bizarre output:

Error: unable to get current list of providers
heartbeat
linbit
pacemaker
redhat

heartbeat
linbit
pacemaker
redhat

So it claims it can't get the list, and then prints the expected list twice. I'm assuming this is some weird cosmetic bug?
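
For what it's worth, the provider names ultimately correspond to the directories under the OCF resource agent root, so the expected list can be cross-checked directly:

# each directory here is one OCF provider (heartbeat, linbit, pacemaker, redhat, ...)
ls /usr/lib/ocf/resource.d/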

pcs status on CentOS 6.x with cman (Error: no nodes found in corosync.conf)

I think this part (and maybe more) is not working correctly on CentOS 6.x:

if not utils.is_rhel6():
    print "PCSD Status:"
    cluster.cluster_gui_status([],True)
    print ""

Because of is_rhel6():
if re.search(r'Red Hat Enterprise Linux Server release 6.', issue)

On CentOS the output is:

cat /etc/system-release
CentOS release 6.4 (Final)
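
The mismatch is easy to confirm, since the CentOS release string never contains the text that regex looks for:

# no match on CentOS, so is_rhel6() returns False and the non-RHEL6 code path above runs
grep 'Red Hat Enterprise Linux Server release 6\.' /etc/system-release || echo "not detected as RHEL 6"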

ruby-mri only binding to IPV6 interfaces

If you try to 'auth' against an IPv4 address you can't, but IPv6 addresses are fine. See below.


$ pcs cluster auth  127.0.0.1
Error: Unable to communicate with 127.0.0.1

# IPV6 localhost is fine
#
$ pcs cluster auth  ::1
Username: 

# Observation: ruby-mri is only listening on tcp6 not tcp.
#
$ netstat -anp | grep 2224
tcp6       0      0 :::2224                 :::*                    LISTEN      744/ruby-mri

Possible Fix

After making the following change, I can now connect via either IPv4 or IPv6 addresses.

$ diff /usr/lib/pcsd/ssl.rb.bak /usr/lib/pcsd/ssl.rb
31c31
<   :BindAddress        => "::",

---
>   :BindAddress        => "*",

$ systemctl restart pcsd.service
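
It may be worth re-checking the listener after the restart; assuming the wildcard takes effect, pcsd should now show a plain tcp (IPv4) socket in addition to the tcp6 one:

# expect both a tcp and a tcp6 LISTEN entry on port 2224 after the change
$ netstat -anp | grep 2224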

pcs status returns IndexError: list index out of range

I updated pcs, and now pcs doesn't work for me:

[root@g5se-f3efce log]# pcs status
Cluster name: cl-g5se-f3efce
Traceback (most recent call last):
  File "/usr/sbin/pcs", line 155, in <module>
    main(sys.argv[1:])
  File "/usr/sbin/pcs", line 147, in main
    status.status_cmd(argv)
  File "/usr/lib/python2.6/site-packages/pcs/status.py", line 13, in status_cmd
    full_status()
  File "/usr/lib/python2.6/site-packages/pcs/status.py", line 53, in full_status
    if utils.corosyncPacemakerNodeCheck():
  File "/usr/lib/python2.6/site-packages/pcs/utils.py", line 1975, in corosyncPacemakerNodeCheck
    pm_nodes = getPacemakerNodesID()
  File "/usr/lib/python2.6/site-packages/pcs/utils.py", line 1970, in getPacemakerNodesID
    pm_nodes[node_info[0]] = node_info[1]
IndexError: list index out of range
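
Judging from the traceback alone (not verified against the source), getPacemakerNodesID() splits each line of some node listing into an id/name pair, so a line carrying an id but no name would produce exactly this IndexError. Assuming the listing comes from crm_node, the raw output can be inspected with:

# a line with only a node id and no name would break the id/name split above
crm_node -l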

This is what I have installed:

[root@g5se-f3efce log]# yum info pcs
Installed Packages
Name : pcs
Arch : x86_64
Version : 0.9.139
Release : 9.el6_7.2
Size : 14 M
Repo : installed
From repo : corespace-centos-6-updates-x86_64
Summary : Pacemaker Configuration System
URL : http://github.com/feist/pcs
License : GPLv2
Description : pcs is a corosync and pacemaker configuration tool. It permits users to
: easily view, modify and created pacemaker based clusters.

I erased this pcs version and reinstalled an earlier version, and it worked. See below:

[root@g5se-f3efce Packages]# pcs status
Cluster name: cl-g5se-f3efce
Last updated: Thu Feb 18 14:41:34 2016
Last change: Thu Feb 18 11:46:49 2016 via crm_resource on g5se-f3efce
Stack: cman
Current DC: g5se-f3efce - partition with quorum
Version: 1.1.11-97629de
1 Nodes configured
4 Resources configured

Online: [ g5se-f3efce ]

Full list of resources:

sw-ready-g5se-f3efce (ocf::pacemaker:GBmon): Started g5se-f3efce
meta-data (ocf::pacemaker:GBmon): Started g5se-f3efce
netmon (ocf::heartbeat:ethmonitor): Started g5se-f3efce
ClusterIP (ocf::heartbeat:IPaddr2): Stopped

[root@g5se-f3efce Packages]# yum info pcs
Installed Packages
Name : pcs
Arch : noarch
Version : 0.9.90
Release : 1.0.1.el6.centos
Size : 457 k
Repo : installed
From repo : /pcs-0.9.90-1.0.1.el6.centos.noarch
Summary : Pacemaker Configuration System
URL : http://github.com/feist/pcs
License : GPLv2
Description : pcs is a corosync and pacemaker configuration tool. It permits users to
: easily view, modify and created pacemaker based clusters.

More options to configure.

Hello everybody.
Version: pcs 0.9.143 on SL7.
We need the capability to bind to particular IPs, but there is no such option other than directly editing ssl.rb, which is a bad approach because it will be overwritten after the first update.
Why are such things even hardcoded?

Does not support utilization parameters

Pacemaker has support for utilization attributes, as described in Chapter 11 of Configuration Explained.

It would be nice if PCS had the ability to set these attributes so that we would not have to configure them manually.

For example, if I want to set the cpu utilization attribute, I currently have to do the following:

pcs resource create dummy1 ocf:pacemaker:Dummy meta target-role=stopped
crm_resource -r dummy1 -z -p cpu -v 1000
pcs resource start dummy1
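
Until pcs grows native support, a small wrapper around the crm_resource call above at least keeps the manual step scriptable (set_util is just a hypothetical helper name):

# set a utilization attribute on a resource (-z selects utilization, -p/-v name the attribute and value)
set_util() { crm_resource -r "$1" -z -p "$2" -v "$3"; }
set_util dummy1 cpu 1000
set_util dummy1 memory 2048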

bash completion file missing from pcs directory

When trying to 'make install' the latest 0.9.59 version of pcs, I get the error "install: cannot stat `pcs/bash_completion.d.pcs': No such file or directory".
The Makefile is indeed trying to install a file that does not exist.

newlines stripped from attribute values

When trying to use a resource attribute with a newline character in it, the newline is converted into a space.

# pcs resource create test1 ocf:pacemaker:Dummy fake=hello$'\n'world
# cibadmin -Q | grep world
          <nvpair id="test1-instance_attributes-fake" name="fake" value="hello world"/>

pcs cluster status != pcs status cluster

I am seeing the following behavior with 0.9.115 on a CMAN/Corosync/Pacemaker cluster on RHEL 6.4 (using the pcs release downloaded from GitHub).

pcs status cluster behaves as expected, but pcs cluster status also includes a warning about pcsd not running (the help page suggests the command is an alias of pcs status cluster, so I would have expected it to behave the same in both cases).

[root@node0 pcs-0.9.115]# pcs status cluster
Cluster Status:
 Last updated: Mon Mar 10 10:28:49 2014
 Last change: Mon Mar 10 10:25:42 2014 via crm_attribute on node0-priv
 Stack: cman
 Current DC: node0-priv - partition with quorum
 Version: 1.1.10-1.el6_4.4-368c726
 2 Nodes configured
 12 Resources configured

versus

[root@node0 pcs-0.9.115]# pcs cluster status
Cluster Status:
 Last updated: Mon Mar 10 10:30:09 2014
 Last change: Mon Mar 10 10:25:42 2014 via crm_attribute on node0-priv
 Stack: cman
 Current DC: node0-priv - partition with quorum
 Version: 1.1.10-1.el6_4.4-368c726
 2 Nodes configured
 12 Resources configured

PCSD Status:
Error: no nodes found in corosync.conf

Thanks!

on rhel 6.5 "cluster status" does not show any nodes

The /etc/corosync/corosync.conf file does not exist; the configuration is in /etc/cluster/cluster.conf.

[root@node1~]# pcs cluster status
Cluster Status:
Last updated: Fri Jun 6 12:00:47 2014
Last change: Fri Jun 6 11:39:56 2014 via cibadmin on node1
Stack: cman
Current DC: node1 - partition with quorum
Version: 1.1.10-14.el6-368c726
3 Nodes configured
2 Resources configured

PCSD Status:
Error: no nodes found in corosync.conf

[root@node1~]# cat /etc/system-release
Oracle Linux Server release 6.5
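
Since the node list on a cman-based cluster lives in cluster.conf rather than corosync.conf, the nodes pcsd is looking for are visible there instead:

# the <clusternode .../> entries here are what corosync.conf would normally provide
grep clusternode /etc/cluster/cluster.conf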
