A state module to manage Pacemaker/Corosync clusters with the Pacemaker/Corosync configuration system (PCS)
New in version 2016.11.0.
depends: pcs
Walkthrough of a complete PCS cluster setup: http://clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Clusters_from_Scratch/
First, the CIB file must be created:

mysql_pcs__cib_present_cib_for_galera:
    pcs.cib_present:
        - cibname: cib_for_galera
        - scope: None
        - extra_args: None
Then the CIB file can be modified by creating resources (only one resource is created here for demonstration; see also the resource, stonith, and constraint examples below):

mysql_pcs__resource_present_galera:
    pcs.resource_present:
        - resource_id: galera
        - resource_type: "ocf:heartbeat:galera"
        - resource_options:
            - 'wsrep_cluster_address=gcomm://node1.example.org,node2.example.org,node3.example.org'
            - '--master'
        - cibname: cib_for_galera
After modifying the CIB file, it can be pushed to the live CIB of the cluster:

mysql_pcs__cib_pushed_cib_for_galera:
    pcs.cib_pushed:
        - cibname: cib_for_galera
        - scope: None
        - extra_args: None
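For reference, the three states above correspond roughly to the following manual pcs workflow (pcs 0.9.x-era syntax; node names are the example values from above, and flags may differ in newer pcs releases):

```shell
# Dump the live CIB into a local file (what pcs.cib_present does)
pcs cluster cib cib_for_galera

# Modify the offline CIB file instead of the live cluster via -f
pcs -f cib_for_galera resource create galera ocf:heartbeat:galera \
    wsrep_cluster_address=gcomm://node1.example.org,node2.example.org,node3.example.org \
    --master

# Push the modified file back to the live CIB (what pcs.cib_pushed does)
pcs cluster cib-push cib_for_galera
```

Working against an offline CIB file and pushing it once keeps intermediate, inconsistent states from ever reaching the live cluster.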
Create a cluster from scratch:

Authorize the nodes to each other. Note: on Debian/Ubuntu, installing the cluster packages rolls out a default cluster that needs to be destroyed before the new cluster can be created; this is a little complicated, so in most cases it is best to just run the cluster_setup state below, which performs authorization as well:
pcs_auth__auth:
    pcs.auth:
        - nodes:
            - node1.example.com
            - node2.example.com
        - pcsuser: hacluster
        - pcspasswd: hoonetorg
Do the initial cluster setup:
pcs_setup__setup:
    pcs.cluster_setup:
        - nodes:
            - node1.example.com
            - node2.example.com
        - pcsclustername: pcscluster
        - extra_args:
            - '--start'
            - '--enable'
        - pcsuser: hacluster
        - pcspasswd: hoonetorg
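The auth and cluster_setup states above correspond roughly to these manual pcs commands (pcs 0.9.x syntax; on pcs 0.10+ the equivalents are `pcs host auth` and `pcs cluster setup <name> <nodes>`):

```shell
# Authorize the nodes to each other (what pcs.auth does)
pcs cluster auth node1.example.com node2.example.com -u hacluster -p hoonetorg

# Create the cluster, then start and enable it on all nodes
pcs cluster setup --name pcscluster node1.example.com node2.example.com \
    --start --enable
```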
Optional: Set cluster properties:
pcs_properties__prop_has_value_no-quorum-policy:
    pcs.prop_has_value:
        - prop: no-quorum-policy
        - value: ignore
        - cibname: cib_for_cluster_settings
Optional: Set resource defaults:
pcs_properties__resource_defaults_to_resource-stickiness:
    pcs.resource_defaults_to:
        - default: resource-stickiness
        - value: 100
        - cibname: cib_for_cluster_settings
Optional: Set resource op defaults:
pcs_properties__resource_op_defaults_to_monitor-interval:
    pcs.resource_op_defaults_to:
        - op_default: monitor-interval
        - value: 60s
        - cibname: cib_for_cluster_settings
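The three settings states above map roughly onto these manual pcs commands (pcs 0.9.x syntax; newer pcs releases changed the defaults subcommands to `pcs resource defaults update`):

```shell
# Cluster property (what pcs.prop_has_value manages)
pcs property set no-quorum-policy=ignore

# Resource default (what pcs.resource_defaults_to manages)
pcs resource defaults resource-stickiness=100

# Resource operation default (what pcs.resource_op_defaults_to manages)
pcs resource op defaults monitor-interval=60s
```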
Configure fencing (note: fencing is often not optional on a production-ready cluster):
pcs_stonith__created_eps_fence:
    pcs.stonith_present:
        - stonith_id: eps_fence
        - stonith_device_type: fence_eps
        - stonith_device_options:
            - 'pcmk_host_map=node1.example.org:01;node2.example.org:02'
            - 'ipaddr=myepsdevice.example.org'
            - 'power_wait=5'
            - 'verbose=1'
            - 'debug=/var/log/pcsd/eps_fence.log'
            - 'login=hidden'
            - 'passwd=hoonetorg'
        - cibname: cib_for_stonith
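The stonith state above corresponds roughly to this manual pcs command, applied against the offline CIB file named in `cibname` (pcs 0.9.x syntax; the device options are the example values from above):

```shell
# Create the fence_eps stonith resource in the offline CIB file
pcs -f cib_for_stonith stonith create eps_fence fence_eps \
    'pcmk_host_map=node1.example.org:01;node2.example.org:02' \
    ipaddr=myepsdevice.example.org power_wait=5 verbose=1 \
    debug=/var/log/pcsd/eps_fence.log login=hidden passwd=hoonetorg
```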
Add resources to your cluster:
mysql_pcs__resource_present_galera:
    pcs.resource_present:
        - resource_id: galera
        - resource_type: "ocf:heartbeat:galera"
        - resource_options:
            - 'wsrep_cluster_address=gcomm://node1.example.org,node2.example.org,node3.example.org'
            - '--master'
        - cibname: cib_for_galera
Optional: Add constraints (locations, colocations, orders):
haproxy_pcs__constraint_present_colocation-vip_galera-haproxy-clone-INFINITY:
    pcs.constraint_present:
        - constraint_id: colocation-vip_galera-haproxy-clone-INFINITY
        - constraint_type: colocation
        - constraint_options:
            - 'add'
            - 'vip_galera'
            - 'with'
            - 'haproxy-clone'
        - cibname: cib_for_haproxy
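The constraint_options list above is passed through to pcs almost verbatim: the state corresponds roughly to this manual command (pcs 0.9.x syntax; INFINITY is the default colocation score):

```shell
# Keep vip_galera on the same node as haproxy-clone, in the offline CIB file
pcs -f cib_for_haproxy constraint colocation add vip_galera with haproxy-clone
```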
New in version 2016.3.0.
salt.states.pcs.auth(name, nodes, pcsuser='hacluster', pcspasswd='hacluster', extra_args=None)
    Ensure all nodes are authorized to the cluster.
Example:
pcs_auth__auth:
    pcs.auth:
        - nodes:
            - node1.example.com
            - node2.example.com
        - pcsuser: hacluster
        - pcspasswd: hoonetorg
        - extra_args: []
salt.states.pcs.cib_present(name, cibname, scope=None, extra_args=None)
    Ensure that a CIB file with the content of the current live CIB is created.

    Should be run on one cluster node only (there may be races).
Example:
mysql_pcs__cib_present_cib_for_galera:
    pcs.cib_present:
        - cibname: cib_for_galera
        - scope: None
        - extra_args: None
salt.states.pcs.cib_pushed(name, cibname, scope=None, extra_args=None)
    Ensure that a CIB file is pushed if it has changed since its creation with pcs.cib_present.

    Should be run on one cluster node only (there may be races).
Example:
mysql_pcs__cib_pushed_cib_for_galera:
    pcs.cib_pushed:
        - cibname: cib_for_galera
        - scope: None
        - extra_args: None
salt.states.pcs.cluster_node_present(name, node, extra_args=None)
    Add a node to the Pacemaker cluster via PCS.

    Should be run on one cluster node only (there may be races). Can only be run on an already set-up/added node.
Example:
pcs_setup__node_add_node1.example.com:
    pcs.cluster_node_present:
        - node: node1.example.com
        - extra_args:
            - '--start'
            - '--enable'
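For reference, the state above corresponds roughly to this manual pcs command, run from a node that is already part of the cluster (pcs 0.9.x syntax; newer pcs releases use `pcs cluster node add` with slightly different flags):

```shell
# Add node1.example.com to the existing cluster, then start and enable it there
pcs cluster node add node1.example.com --start --enable
```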
salt.states.pcs.cluster_setup(name, nodes, pcsclustername='pcscluster', extra_args=None, pcsuser='hacluster', pcspasswd='hacluster', pcs_auth_extra_args=None, wipe_default=False)
    Set up a Pacemaker cluster on the given nodes.

    Should be run on one cluster node only to avoid race conditions. This performs authorization as well as setup, so it can be run in place of the auth state. On Debian/Ubuntu it is recommended not to run auth separately for a new cluster but to run only this state, because of the initial cluster config that is installed there by default.
Example:
pcs_setup__setup:
    pcs.cluster_setup:
        - nodes:
            - node1.example.com
            - node2.example.com
        - pcsclustername: pcscluster
        - extra_args:
            - '--start'
            - '--enable'
        - pcsuser: hacluster
        - pcspasswd: hoonetorg
salt.states.pcs.constraint_present(name, constraint_id, constraint_type, constraint_options=None, cibname=None)
    Ensure that a constraint is created.

    Should be run on one cluster node only (there may be races). Can only be run on a node with a functional Pacemaker/Corosync.
Example:
haproxy_pcs__constraint_present_colocation-vip_galera-haproxy-clone-INFINITY:
    pcs.constraint_present:
        - constraint_id: colocation-vip_galera-haproxy-clone-INFINITY
        - constraint_type: colocation
        - constraint_options:
            - 'add'
            - 'vip_galera'
            - 'with'
            - 'haproxy-clone'
        - cibname: cib_for_haproxy
salt.states.pcs.prop_has_value(name, prop, value, extra_args=None, cibname=None)
    Ensure that a property in the cluster is set to a given value.

    Should be run on one cluster node only (there may be races).
Example:
pcs_properties__prop_has_value_no-quorum-policy:
    pcs.prop_has_value:
        - prop: no-quorum-policy
        - value: ignore
        - cibname: cib_for_cluster_settings
salt.states.pcs.resource_defaults_to(name, default, value, extra_args=None, cibname=None)
    Ensure a resource default in the cluster is set to a given value.

    Should be run on one cluster node only (there may be races). Can only be run on a node with a functional Pacemaker/Corosync.
Example:
pcs_properties__resource_defaults_to_resource-stickiness:
    pcs.resource_defaults_to:
        - default: resource-stickiness
        - value: 100
        - cibname: cib_for_cluster_settings
salt.states.pcs.resource_op_defaults_to(name, op_default, value, extra_args=None, cibname=None)
    Ensure a resource operation default in the cluster is set to a given value.

    Should be run on one cluster node only (there may be races). Can only be run on a node with a functional Pacemaker/Corosync.
Example:
pcs_properties__resource_op_defaults_to_monitor-interval:
    pcs.resource_op_defaults_to:
        - op_default: monitor-interval
        - value: 60s
        - cibname: cib_for_cluster_settings
salt.states.pcs.resource_present(name, resource_id, resource_type, resource_options=None, cibname=None)
    Ensure that a resource is created.

    Should be run on one cluster node only (there may be races). Can only be run on a node with a functional Pacemaker/Corosync.
Example:
mysql_pcs__resource_present_galera:
    pcs.resource_present:
        - resource_id: galera
        - resource_type: "ocf:heartbeat:galera"
        - resource_options:
            - 'wsrep_cluster_address=gcomm://node1.example.org,node2.example.org,node3.example.org'
            - '--master'
        - cibname: cib_for_galera
salt.states.pcs.stonith_present(name, stonith_id, stonith_device_type, stonith_device_options=None, cibname=None)
    Ensure that a fencing resource is created.

    Should be run on one cluster node only (there may be races). Can only be run on a node with a functional Pacemaker/Corosync.
Example:
pcs_stonith__created_eps_fence:
    pcs.stonith_present:
        - stonith_id: eps_fence
        - stonith_device_type: fence_eps
        - stonith_device_options:
            - 'pcmk_host_map=node1.example.org:01;node2.example.org:02'
            - 'ipaddr=myepsdevice.example.org'
            - 'power_wait=5'
            - 'verbose=1'
            - 'debug=/var/log/pcsd/eps_fence.log'
            - 'login=hidden'
            - 'passwd=hoonetorg'
        - cibname: cib_for_stonith