High availability (with corosync/pacemaker)
Configuration
The inventory may define the following host groups:
observers
: Set of hosts that observe the cluster (only the first one is considered).
hypervisors
: Set of machines that host the VMs.
Remember that the cluster must contain an odd number of machines. For example, three hypervisors or one observer and two hypervisors.
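For illustration, a minimal inventory with one observer and two hypervisors (an odd total, as required for quorum) could look like the sketch below. Host names and addresses are placeholders; the exact group and variable layout depends on your SEAPATH inventory.

```yaml
# Hypothetical minimal inventory: one observer and two hypervisors.
all:
  children:
    observers:
      hosts:
        observer1:
          ansible_host: 192.168.1.10
    hypervisors:
      hosts:
        hypervisor1:
          ansible_host: 192.168.1.11
        hypervisor2:
          ansible_host: 192.168.1.12
```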
Node redundancy
All nodes in the cluster have access to shared storage via Ceph (see the Shared storage section). This puts the cluster in N-to-N redundancy mode.
Corosync provides messaging and membership services.
Pacemaker manages the cluster and synchronizes resources across nodes.
More details are available in the Pacemaker and Corosync documentation.
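Once the cluster is up, both layers can be checked from any node with the standard Corosync and Pacemaker tools (the output will vary with your setup, and these commands require a running cluster):

```shell
# Show quorum state and the current corosync membership
corosync-quorumtool -s
# Show the pacemaker cluster status once and exit
crm_mon -1
```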
pacemaker-remote
pacemaker-remote is a component that can be installed inside a VM to let Pacemaker manage and monitor resources within it.
For instance, with pacemaker-remote, Pacemaker can monitor services and containers running directly inside a VM.
For more information about pacemaker-remote refer to https://clusterlabs.org/pacemaker/doc/2.1/Pacemaker_Remote/singlehtml/.
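As a sketch, enabling pacemaker-remote inside a Debian-based guest typically amounts to installing the package, sharing the cluster authentication key, and starting the daemon (package and service names may differ on other distributions):

```shell
# Install the remote daemon and common resource agents
apt-get install -y pacemaker-remote resource-agents
# The guest must hold a copy of the cluster authentication key,
# taken from a cluster node, at /etc/pacemaker/authkey.
systemctl enable --now pacemaker_remote
```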
Management tool
The vm_manager project is a high-level interface on top of Pacemaker and Ceph that manages VMs as cluster resources. It is installed during the installation step and provides the vm-mgr command.
Please refer to How to manage VM in SEAPATH for more details.
Node replacement
In case one of the nodes suffers a failure that is difficult to repair (a dead motherboard, for example, or a lost disk with no RAID), it might become necessary to replace the server with a blank one.
From the cluster point of view, we will need to remove the old node and add the new one, for both corosync/pacemaker and Ceph.
The ansible/playbooks/replace_machine_remove_machine_from_cluster.yaml playbook removes a node from the cluster. For this, the machine_to_remove variable should be set to the hostname of the node to remove.
The command below should be launched from the ansible project:
cqfd run ansible-playbook -i /path/to/inventory.yaml -e machine_to_remove=HOSTNAME playbooks/replace_machine_remove_machine_from_cluster.yaml
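After the playbook completes, it may be worth confirming on a surviving node that the old host is gone from both layers; crm_mon and the Ceph CLI can be used for this:

```shell
# The removed host should no longer appear in the node list
crm_mon -1
# Nor in the Ceph cluster topology and health report
ceph osd tree
ceph status
```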
A new host should be installed with the ISO installer, using the same hostname, IP address, etc. as the old node.
Make the "cluster network" connections between hosts.
Run the cluster_setup_debian.yml playbook again to configure the new host in the cluster (see the cluster setup documentation for more details).
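Assuming the playbook lives under playbooks/ like the removal playbook above (the path may differ in your checkout), the re-run could look like:

```shell
cqfd run ansible-playbook -i /path/to/inventory.yaml playbooks/cluster_setup_debian.yml
```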