{"id":8293,"date":"2020-09-04T17:05:36","date_gmt":"2020-09-04T16:05:36","guid":{"rendered":"https:\/\/blog.capdata.fr\/?p=8293"},"modified":"2020-09-04T17:05:36","modified_gmt":"2020-09-04T16:05:36","slug":"aws-configurer-un-cluster-postgresql-hd-avec-corosync-pacemaker-sur-des-ec2-amazon","status":"publish","type":"post","link":"https:\/\/blog.capdata.fr\/index.php\/aws-configurer-un-cluster-postgresql-hd-avec-corosync-pacemaker-sur-des-ec2-amazon\/","title":{"rendered":"AWS: Configuring a PostgreSQL HA cluster with Corosync\/Pacemaker on Amazon EC2 instances"},"content":{"rendered":"<p>Hello<\/p>\n<p>After a few months of absence due to the health restrictions, we are back with a new AWS PostgreSQL topic.<br \/>\nWe recently set up a High Availability cluster at one of our clients using PgPool-II.<\/p>\n<p>That product has the particularity of providing many services, including connection pooling, but also high availability with virtual IP failover.<\/p>\n<p>Today&#8217;s topic, however, is how to configure the Corosync \/ Pacemaker stack to provide PAF (&#8220;PostgreSQL Automatic Failover&#8221;) on Amazon EC2 instances.<\/p>\n<p>Many articles on the Internet cover the setup of these well-known tools, Corosync and Pacemaker, long familiar in the Unix world.<br \/>\nThey are widely used to manage an Apache or Tomcat web server, or an application server, for example.<\/p>\n<p>PostgreSQL is compatible with this high-availability stack, which makes it a highly available database server with fast, automatic failover.<\/p>\n<p>&nbsp;<\/p>\n<p>But what about running a highly available PostgreSQL on AWS EC2 VMs?<\/p>\n<p>&nbsp;<\/p>\n<p>Remember that for a failover relying on &#8220;fencing&#8221; to be transparent, it is strongly recommended to set up a virtual IP that is brought up on the node where the PostgreSQL instance is &#8220;primary&#8221;.
 The Corosync \/ Pacemaker stack will then automatically fail over the associated resources, notably PostgreSQL, and thus avoid a &#8220;split brain&#8221; (2 instances active at the same time).<\/p>\n<p>&nbsp;<\/p>\n<h1>The High Availability cluster.<\/h1>\n<p>In our example, we start with 2 CentOS 7 Amazon EC2 VMs deployed on a private subnet.<br \/>\nThese VMs are therefore not reachable from the Internet, which is what AWS security best practices recommend for a database cluster.<\/p>\n<p>The idea is to end up with the following connection layout for our cluster:<\/p>\n<p>&nbsp;<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-8286 size-full aligncenter\" src=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2020\/09\/HD.jpg\" alt=\"\" width=\"586\" height=\"783\" srcset=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2020\/09\/HD.jpg 586w, https:\/\/blog.capdata.fr\/wp-content\/uploads\/2020\/09\/HD-225x300.jpg 225w\" sizes=\"auto, (max-width: 586px) 100vw, 586px\" \/><\/p>\n<p>&nbsp;<\/p>\n<p>In this diagram, the IPs are given as examples.<br \/>\nThroughout this walkthrough, we will work with 2 Amazon VMs with the following characteristics:<\/p>\n<ul>\n<li>1 VM named ip-172-44-2-226 with private IP 172.44.2.226<\/li>\n<li>1 VM named ip-172-44-2-143 with private IP 172.44.2.143<\/li>\n<li>1 virtual IP declared with the value 172.44.2.144<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<h3><em>Prerequisites<\/em><\/h3>\n<h4><span style=\"color: #0000ff;\">PostgreSQL 12 Streaming Replication<\/span><\/h4>\n<p>We installed a PostgreSQL 12.4 cluster, with Streaming Replication in place.<br \/>\nFeel free to read <a href=\"https:\/\/blog.capdata.fr\/index.php\/postgresql-la-streaming-replication-en-12\/\">this article<\/a> for the steps to follow to set it up.<\/p>\n<h4><span style=\"color: #0000ff;\">SSH configuration<\/span><\/h4>\n<p>Create SSH keys for root and for postgres so that the 2 servers do not have to enter a password to connect to each other.<br \/>\nThis must be done on both EC2 VMs.<\/p>\n<pre># cd ~\/.ssh\r\n# ssh-keygen -t rsa -f id_rsa\r\n# ssh-copy-id -i id_rsa.pub root@172.44.2.143\r\n# ssh-copy-id -i id_rsa.pub root@172.44.2.226\r\n\r\npostgres $ cd ~\/.ssh\r\npostgres $ ssh-keygen -t rsa -f id_rsa\r\npostgres $ ssh-copy-id -i id_rsa.pub postgres@172.44.2.143\r\npostgres $ ssh-copy-id -i id_rsa.pub postgres@172.44.2.226<\/pre>\n<p>&nbsp;<\/p>\n<h4><span style=\"color: #0000ff;\">AWS configuration<\/span><\/h4>\n<p><span style=\"color: #008000;\"><strong>IAM service<\/strong><\/span><\/p>\n<p>Since we are on Amazon EC2 instances, we need to update our policies so that our VMs can run operations on the instances (stop, reboot, change an IP &#8230;).
 We will see later what these operations consist of.<br \/>\nWhenever possible, and so that your EC2 instances follow security fundamentals, you should create an IAM role and attach it to your 2 EC2 instances.<\/p>\n<p>In this role, define a new policy in which each instance is allowed to perform the following actions:<\/p>\n<ul>\n<li>start an instance<\/li>\n<li>stop an instance<\/li>\n<li>reboot an instance<\/li>\n<li>assign\/unassign a secondary IP<\/li>\n<li>read the characteristics of an instance.<\/li>\n<\/ul>\n<p>For our example, this policy can contain the following JSON document:<\/p>\n<pre>{\r\n    \"Version\": \"2012-10-17\",\r\n    \"Statement\": [\r\n        {\r\n            \"Sid\": \"VisualEditor0\",\r\n            \"Effect\": \"Allow\",\r\n            \"Action\": [\r\n                \"ec2:RebootInstances\",\r\n                \"ec2:DescribeInstances\",\r\n                \"ec2:DetachNetworkInterface\",\r\n                \"ec2:StartInstances\",\r\n                \"ec2:RunInstances\",\r\n                \"ec2:AssignPrivateIpAddresses\",\r\n                \"ec2:UnassignPrivateIpAddresses\",\r\n                \"ec2:AssociateAddress\",\r\n                \"ec2:StopInstances\"\r\n            ],\r\n            \"Resource\": \"*\"\r\n        }\r\n    ]\r\n}<\/pre>\n<p><strong><span style=\"color: #008000;\">Install the AWS CLI on the EC2 instances<\/span><\/strong><\/p>\n<p>Since the cluster will need to act directly on the Amazon EC2 instances through command lines, we need the AWS CLI tool on both VMs.<br \/>\nWe can install it with the following commands:<\/p>\n<pre># curl \"https:\/\/awscli.amazonaws.com\/awscli-exe-linux-x86_64.zip\" -o \"awscliv2.zip\"\r\n# unzip awscliv2.zip\r\n# cd aws\r\n# .\/install\r\n# aws --version<\/pre>\n<p>Note that the AWS CLI is installed in \/usr\/local\/bin. We will need to know this &#8220;PATH&#8221; later.<br \/>\nOnce it is installed, run &#8220;aws configure&#8221;.
 Then query your instance&#8217;s characteristics to validate the communication.<\/p>\n<p>For example, here, retrieving the AWS name of our instance&#8217;s network interface:<\/p>\n<pre># Instance_ID=`\/usr\/bin\/curl --silent http:\/\/169.254.169.254\/latest\/meta-data\/instance-id`\r\n# aws ec2 describe-instances --instance-ids $Instance_ID --region eu-west-3 | grep NetworkInterfaceId | awk -F '\"' '{print $4}'\r\neni-0e2b4ab8b3f9bc1fc\r\n\r\n<\/pre>\n<p>Our information came back correctly, so we can start installing the cluster.<\/p>\n<p>&nbsp;<\/p>\n<h1>Installing the cluster components<\/h1>\n<p>Each of the following actions must be performed on both servers. Use the <span style=\"color: #ff0000;\"><strong>root<\/strong> <\/span>account to install the packages with yum (CentOS\/Red Hat).<\/p>\n<pre># yum install -y pacemaker resource-agents pcs fence-agents-all<\/pre>\n<p>This installs Corosync and Pacemaker as well as the PCS tool, which will be very useful to administer our PostgreSQL HA cluster.<br \/>\nNext, install the package directly related to PostgreSQL Automatic Failover:<\/p>\n<pre># yum install -y resource-agents-paf<\/pre>\n<p>and above all, the package used for a cluster on Amazon EC2 instances:<\/p>\n<pre># yum install -y resource-agents-aws<\/pre>\n<p>Installing Corosync and Pacemaker also created a new system account named &#8220;hacluster&#8221;.
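The metadata query shown earlier pipes the describe-instances output through grep and awk, which depends on the exact layout of the JSON. As a sketch (the sample response below is hypothetical and trimmed to the fields we care about, standing in for a real `aws ec2 describe-instances` call), the same field can be extracted by actually parsing the JSON, here with python3:

```shell
# Hypothetical, trimmed sample of a describe-instances response
# (assumption: the real output nests Reservations/Instances/NetworkInterfaces this way).
cat > /tmp/describe-instances.json <<'EOF'
{"Reservations": [{"Instances": [{"InstanceId": "i-0a72fb68be681f101",
  "NetworkInterfaces": [{"NetworkInterfaceId": "eni-0e2b4ab8b3f9bc1fc"}]}]}]}
EOF

# Parse the JSON instead of grepping it: robust to indentation or ordering changes.
eni=$(python3 -c '
import json, sys
doc = json.load(open(sys.argv[1]))
print(doc["Reservations"][0]["Instances"][0]["NetworkInterfaces"][0]["NetworkInterfaceId"])
' /tmp/describe-instances.json)
echo "$eni"
```

On a live instance, the AWS CLI's own `--query` option (JMESPath) achieves the same without the temporary file.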
 This account is used so that all the cluster members can communicate with each other.<br \/>\nIt must be given a password:<\/p>\n<pre># passwd hacluster<\/pre>\n<p>&nbsp;<\/p>\n<p>We will use the PCS tool, served by the pcsd daemon, to create, configure and administer the cluster.<br \/>\nIt takes care of editing the Corosync and Pacemaker configuration files, but also of the orders sent to AWS, notably for fencing.<\/p>\n<p>We can therefore disable the automatic startup of the other services, and leave only pcsd enabled when the EC2 VM reboots.<\/p>\n<pre># systemctl disable postgresql-12.service\r\n# systemctl disable corosync\r\n# systemctl disable pacemaker\r\n# systemctl enable pcsd.service\r\n# systemctl start pcsd.service<\/pre>\n<h2><\/h2>\n<h2>Cluster configuration<\/h2>\n<p>Our cluster is now installed, but it is not yet configured.<br \/>\nThe &#8220;pcs status&#8221; command shows its state and the state of the resources attached to it.<\/p>\n<p>Make sure the pcsd daemon is running:<\/p>\n<pre># <span style=\"color: #993366;\">systemctl status pcsd.service<\/span>\r\n\u25cf pcsd.service - PCS GUI and remote configuration interface\r\nLoaded: loaded (\/usr\/lib\/systemd\/system\/pcsd.service; enabled; vendor preset: disabled)\r\nActive: active (running) since Thu 2020-09-03 12:15:33 UTC; 2h 45min ago\r\nDocs: man:pcsd(8)\r\nman:pcs(8)\r\nMain PID: 1129 (pcsd)\r\nCGroup: \/system.slice\/pcsd.service\r\n\u251c\u2500 1129 \/usr\/bin\/ruby \/usr\/lib\/pcsd\/pcsd\r\n\u251c\u250013362 \/usr\/bin\/python2 -Es \/usr\/sbin\/pcs cluster stop --pacemaker --force\r\n\u2514\u250013365 \/bin\/systemctl stop pacemaker<\/pre>\n<p>&nbsp;<\/p>\n<h3><em>Creating the cluster<\/em><\/h3>\n<p>&nbsp;<\/p>\n<p>First check that the 2 VMs can see each other, by authenticating the nodes with the hacluster account:<\/p>\n<pre># <span style=\"color: #993366;\">pcs cluster auth ip-172-44-2-226 ip-172-44-2-143 -u hacluster<\/span>\r\nPassword: \r\nip-172-44-2-226: Authorized\r\nip-172-44-2-143: Authorized<\/pre>\n<p>We are ready to configure the cluster:<\/p>\n<pre># <span style=\"color: #993366;\">pcs cluster setup --name cluster_pghd12 ip-172-44-2-226 ip-172-44-2-143<\/span>\r\nDestroying cluster on nodes: ip-172-44-2-226, ip-172-44-2-143...\r\nip-172-44-2-226: Stopping Cluster (pacemaker)...\r\nip-172-44-2-143: Stopping Cluster (pacemaker)...\r\nip-172-44-2-226: Successfully destroyed cluster\r\nip-172-44-2-143: Successfully destroyed cluster\r\n\r\nSending 'pacemaker_remote authkey' to 'ip-172-44-2-226', 'ip-172-44-2-143'\r\nip-172-44-2-226: successful distribution of the file 'pacemaker_remote authkey'\r\nip-172-44-2-143: successful distribution of the file 'pacemaker_remote authkey'\r\nSending cluster config files to the nodes...\r\nip-172-44-2-226: Succeeded\r\nip-172-44-2-143: Succeeded\r\n\r\nSynchronizing pcsd certificates on nodes ip-172-44-2-226, ip-172-44-2-143...\r\nip-172-44-2-226: Success\r\nip-172-44-2-143: Success\r\nRestarting pcsd on the nodes in order to reload the certificates...\r\nip-172-44-2-226: Success\r\nip-172-44-2-143: Success<\/pre>\n<p>Restart the whole cluster:<\/p>\n<pre># <span style=\"color: #993366;\">pcs cluster start --all<\/span>\r\nip-172-44-2-226: Starting Cluster...\r\nip-172-44-2-143: Starting Cluster...<\/pre>\n<p>&nbsp;<\/p>\n<h3><em>Configuring fencing<\/em><\/h3>\n<p>The most critical part of a cluster is fencing.
 This operation consists in evicting a node that has been declared failed.<br \/>\nIt is done to avoid &#8220;split brain&#8221;; for a database server this would be very problematic without a fencing method in place, because it would mean the 2 instances could get out of sync and each be seen as standalone.<\/p>\n<p>The strong but radical fencing method is STONITH (Shoot The Other Node In The Head): the node declared failed is simply rebooted.<\/p>\n<p>First, find the id of the 2 EC2 instances with this command, run on both VMs:<\/p>\n<pre># curl --silent http:\/\/169.254.169.254\/latest\/meta-data\/instance-id<\/pre>\n<p>Once you have them, create the fencing device:<\/p>\n<pre><span style=\"color: #333333;\"># pcs cluster cib cluster1.xml\r\n\r\n<\/span><span style=\"color: #333333;\"># pcs stonith create clusterfence fence_aws region=eu-west-3 \\\r\n<\/span>  pcmk_host_map=\"ip-172-44-2-226:i-0a72fb68be681f101;ip-172-44-2-143:i-0c6a7c72efad7e65e\" \\\r\n  power_timeout=240 pcmk_reboot_timeout=480 pcmk_reboot_retries=3<\/pre>\n<pre><span style=\"color: #333333;\"># pcs cluster cib-push cluster1.xml<\/span><\/pre>\n<p>Fencing will be active only for these 2 EC2 instances.
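The `pcmk_host_map` value above simply pairs each Pacemaker node name with its EC2 instance id, the pairs separated by `;`. As a small sketch (hostnames and instance ids reused from the example; the loop itself is just illustrative), the string can be built programmatically, which helps avoid typos when the cluster has more than two nodes:

```shell
# Node-name -> instance-id pairs for the example cluster.
nodes="ip-172-44-2-226:i-0a72fb68be681f101 ip-172-44-2-143:i-0c6a7c72efad7e65e"

# Join the pairs with ';' as fence_aws expects for pcmk_host_map.
host_map=""
for pair in $nodes; do
  host_map="${host_map:+$host_map;}$pair"
done
echo "$host_map"
```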
 The configuration has been written to the cluster XML file.<\/p>\n<pre># <strong><span style=\"color: #993366;\">pcs status<\/span><\/strong>\r\nCluster name: cluster_pghd12\r\nStack: corosync\r\nCurrent DC: ip-172-44-2-143 (version 1.1.21-4.el7-f14e36fd43) - partition with quorum\r\nLast updated: Thu Sep 3 11:53:24 2020\r\nLast change: Thu Sep 3 10:04:28 2020 by root via crm_resource on ip-172-44-2-226\r\n\r\n2 nodes configured\r\n1 resource configured\r\n\r\nOnline: [ ip-172-44-2-143 ip-172-44-2-226 ]\r\n\r\nFull list of resources:\r\n\r\nclusterfence (stonith:fence_aws): Started ip-172-44-2-143\r\n\r\nDaemon Status:\r\ncorosync: active\/enabled\r\npacemaker: active\/enabled\r\npcsd: active\/enabled<\/pre>\n<p>&nbsp;<\/p>\n<h3><em>Creating the cluster resources<\/em><\/h3>\n<p>We will create 4 resources used to support PostgreSQL Automatic Failover:<\/p>\n<ul>\n<li><span style=\"color: #800000;\"><strong>pgsqld<\/strong> <\/span>for the PostgreSQL instance properties: information about the binaries, the PGDATA, the port and the configuration<\/li>\n<li><span style=\"color: #800000;\"><strong>pgsql-ha<\/strong><\/span>, the resource that monitors the <strong><span style=\"color: #800000;\">pgsqld<\/span><\/strong> process, orchestrates the failover when needed and its placement as master\/slave.<\/li>\n<li><strong><span style=\"color: #800000;\">pgsql-master-ip<\/span><\/strong>, the resource that brings up the virtual IP on the local node where PostgreSQL must be primary<\/li>\n<li><strong><span style=\"color: #800000;\">pgsql-awsvip<\/span><\/strong>, which relays the information to the AWS side and assigns the virtual IP as a secondary IP to the EC2 instance hosting the primary PostgreSQL instance.<\/li>\n<\/ul>\n<p>The resources are created in the following order:<\/p>\n<pre># pcs cluster cib cluster1.xml\r\n# pcs resource create pgsqld ocf:heartbeat:pgsqlms \\\r\nbindir=\/usr\/pgsql-12\/bin pgdata=\/data\/postgres\/12 op start timeout=60s \\\r\nop stop timeout=60s \\\r\nop promote timeout=30s \\\r\nop demote timeout=120s \\\r\nop monitor interval=15s timeout=10s role=\"Master\" \\\r\nop monitor interval=16s timeout=10s role=\"Slave\" \\\r\nop notify timeout=60s<\/pre>\n<p>Note that since PostgreSQL version 12, the &#8220;recovery_template&#8221; parameter no longer needs to be configured, because the &#8220;recovery.conf&#8221; file no longer exists. The settings are read directly from the &#8220;postgresql.auto.conf&#8221; file.<br \/>\nOnly the &#8220;application_name&#8221; parameter must be set to the DNS name, or the IP, of the server the instance runs on.<\/p>\n<p>Example for server ip-172-44-2-143:<\/p>\n<pre>   $ <span style=\"color: #993366;\">cat postgresql.auto.conf<\/span>\r\n   primary_conninfo = 'user=repli passfile=\/var\/lib\/pgsql\/.pgpass host=172.44.2.226 port=5432 <span style=\"color: #3366ff;\"><strong>application_name=ip-172-44-2-143<\/strong><\/span>'\r\n   recovery_target_timeline='latest'\r\n   restore_command='scp 172.44.2.226:\/data\/postgres\/archives\/%f %p'<\/pre>\n<p>Once the PostgreSQL instance information has been created, we create the high-availability resource:<\/p>\n<pre># pcs resource master pgsql-ha pgsqld notify=true<\/pre>\n<p>&nbsp;<\/p>\n<p>Then comes the local virtual IP and AWS instance part.
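One note on the postgresql.auto.conf example above: application_name must match the node's own name while host points at its peer, so the two nodes' files are mirror images of each other. A sketch (user, paths and IPs reused from the example; the `make_conninfo` helper is hypothetical) of rendering the line for a given node\/peer pair:

```shell
# Render the primary_conninfo line for one node of the pair.
# $1 = local node name (used as application_name), $2 = peer IP (used as host).
make_conninfo() {
  printf "primary_conninfo = 'user=repli passfile=/var/lib/pgsql/.pgpass host=%s port=5432 application_name=%s'\n" "$2" "$1"
}

# On ip-172-44-2-143, the peer is 172.44.2.226:
make_conninfo ip-172-44-2-143 172.44.2.226
```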
 We will put these 2 resources in the same group, named &#8220;aws-group&#8221;.<\/p>\n<pre># pcs resource create pgsql-master-ip ocf:heartbeat:IPaddr2 ip=172.44.2.144 nic=eth0:1 cidr_netmask=24 op monitor interval=10s\r\n\r\n# pcs resource create pgsql_awsvip ocf:heartbeat:awsvip awscli=\/usr\/local\/bin\/aws secondary_private_ip=172.44.2.144 --group aws-group\r\n\r\n# pcs resource group add aws-group pgsql-master-ip<\/pre>\n<p>&nbsp;<\/p>\n<p>To configure the dependencies between these resources, we rely on the &#8220;colocation&#8221; and &#8220;order&#8221; constraints.<br \/>\nThe VIP must depend on the startup of the primary node.<\/p>\n<pre># pcs constraint colocation add pgsql-master-ip with master pgsql-ha INFINITY\r\n# pcs constraint colocation add pgsql_awsvip with master pgsql-ha INFINITY\r\n<\/pre>\n<p>&nbsp;<\/p>\n<p>Startup order when the instance is promoted:<\/p>\n<pre># pcs constraint order promote pgsql-ha then start pgsql-master-ip symmetrical=false kind=Mandatory\r\n# pcs constraint order promote pgsql-ha then start pgsql_awsvip symmetrical=false kind=Mandatory<\/pre>\n<p>&nbsp;<\/p>\n<p>Stop order when a node is demoted to standby:<\/p>\n<pre># pcs constraint order demote pgsql-ha then stop pgsql-master-ip symmetrical=false kind=Mandatory\r\n# pcs constraint order demote pgsql-ha then stop pgsql_awsvip symmetrical=false kind=Mandatory\r\n# pcs cluster cib-push cluster1.xml<\/pre>\n<p>&nbsp;<\/p>\n<p>With this configuration, &#8220;pcs status&#8221; should report the following:<\/p>\n<pre># <strong><span style=\"color: #993366;\">pcs status --full<\/span><\/strong>\r\nCluster name: cluster_pghd12\r\nStack: corosync\r\nCurrent DC: ip-172-44-2-226 (1) (version 1.1.21-4.el7-f14e36fd43) - partition with quorum\r\nLast updated: Fri Sep 4 10:15:16 2020\r\nLast change: Fri Sep 4 10:12:37 2020 by root via cibadmin on ip-172-44-2-143\r\n\r\n2 nodes configured\r\n5 resources configured\r\n\r\nOnline: [ ip-172-44-2-143 (2) ip-172-44-2-226 (1) ]\r\n\r\nFull list of resources:\r\n\r\nMaster\/Slave Set: pgsql-ha [pgsqld]\r\npgsqld (ocf::heartbeat:pgsqlms): Slave ip-172-44-2-226\r\npgsqld (ocf::heartbeat:pgsqlms): Master ip-172-44-2-143\r\nMasters: [ ip-172-44-2-143 ]\r\nSlaves: [ ip-172-44-2-226 ]\r\nclusterfence (stonith:fence_aws): Started ip-172-44-2-143\r\nResource Group: aws-group\r\npgsql_awsvip (ocf::heartbeat:awsvip): Started ip-172-44-2-143\r\npgsql-master-ip (ocf::heartbeat:IPaddr2): Started ip-172-44-2-143\r\n\r\nNode Attributes:\r\n* Node ip-172-44-2-143 (2):\r\n+ master-pgsqld : 1001\r\n* Node ip-172-44-2-226 (1):\r\n+ master-pgsqld : 1000\r\n\r\nPCSD Status:\r\nip-172-44-2-143: Online\r\nip-172-44-2-226: Online\r\n\r\nDaemon Status:\r\ncorosync: active\/disabled\r\npacemaker: active\/disabled\r\npcsd: active\/enabled<\/pre>\n<p>&nbsp;<\/p>\n<p>In this configuration, we can clearly see that the primary node is ip-172-44-2-143; it carries the system VIP and the AWS secondary IP.<br \/>\nThe PostgreSQL instance is primary on this node.
V\u00e9rifions les informations\u00a0 :<\/p>\n<pre># <span style=\"color: #993366;\">ifconfig<\/span>\r\neth0: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;\u00a0 mtu 9001\r\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 inet 172.44.2.143\u00a0 netmask 255.255.255.0\u00a0 broadcast 172.44.2.255\r\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 inet6 fe80::845:a9ff:fec9:3e \u00a0prefixlen 64\u00a0 scopeid 0x20&lt;link&gt;\r\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 ether 0a:45:a9:c9:00:3e\u00a0 txqueuelen 1000\u00a0 (Ethernet)\r\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 RX packets 57218\u00a0 bytes 54966962 (52.4 MiB)\r\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 RX errors 0\u00a0 dropped 0\u00a0 overruns 0\u00a0 frame 0\r\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 TX packets 42097\u00a0 bytes 5473829 (5.2 MiB)\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \u00a0TX errors 0\u00a0 dropped 0 overruns 0\u00a0 carrier 0\u00a0 collisions 0\r\n\r\n\r\neth0:1: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;\u00a0 mtu 9001\r\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 inet 172.44.2.144\u00a0 netmask 255.255.255.0\u00a0 broadcast 172.44.2.255\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 ether 0a:45:a9:c9:00:3e\u00a0 txqueuelen 1000\u00a0 (Ethernet)<\/pre>\n<p>&nbsp;<\/p>\n<p>et sur AWS<\/p>\n<p>&nbsp;<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-8311 size-full\" src=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2020\/09\/ipsec_143.jpg\" alt=\"\" width=\"432\" height=\"94\" srcset=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2020\/09\/ipsec_143.jpg 432w, https:\/\/blog.capdata.fr\/wp-content\/uploads\/2020\/09\/ipsec_143-300x65.jpg 300w\" sizes=\"auto, (max-width: 432px) 100vw, 432px\" \/><\/p>\n<p>&nbsp;<\/p>\n<pre>[root@ip-172-44-2-143:0 ~]# Instance_ID=`\/usr\/bin\/curl --silent http:\/\/169.254.169.254\/latest\/meta-data\/instance-id`\r\n[root@ip-172-44-2-143:0 ~]# aws ec2 describe-instances --instance-ids\u00a0 $Instance_ID --region eu-west-3 | grep 
\"PrivateIpAddress\"\r\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"PrivateIpAddress\": \"172.44.2.143\",\r\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"PrivateIpAddress\": \"172.44.2.143\",\r\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"PrivateIpAddresses\": [\r\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"PrivateIpAddress\": \"172.44.2.143\"\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\"PrivateIpAddress\": \"172.44.2.144\"\r\n\r\n\r\n\r\n\r\n<\/pre>\n<p>S&#8217;il l&#8217;on lance une connexion depuis un client PostgreSQL quelconque (exemple d&#8217;un VM dont l&#8217;IP est dans le m\u00eame subnet, \u00e0 savoir 172.44.2.194)<\/p>\n<pre>[postgres@ip-172-44-2-194 ~]$ <span style=\"color: #993366;\">psql -h 172.44.2.144 -U manu manuelo<\/span>\r\nPassword for user manu:\r\npsql (12.0, server 12.4)\r\nType \"help\" for help.\r\n\r\nmanuelo=# select usename,client_addr,query from pg_stat_activity where usename is not null;\r\n usename   | client_addr  | query\r\n\u00a0----------+--------------+-----------------------------------------------------------------------------------\u00a0\r\n postgres  |              |\r\n repli     | 172.44.2.226 |\r\n manu      | 172.44.2.194 | select usename,client_addr,query from pg_stat_activity where usename is not null;\r\n\u00a0\r\n(3 rows)<\/pre>\n<p>Nous avons donc r\u00e9aliser notre connexion sur la VIP 
172.44.2.144 depuis une autre VM EC2 Amazon du m\u00eame subnet.<br \/>\nNous voyons \u00e9galement que la r\u00e9plication, via le user &#8220;repli&#8221; est active depuis l&#8217;autre noeud du cluster portant l&#8217;IP 172.44.2.226.<\/p>\n<p>Il suffira de jouer avec les &#8220;security groups&#8221; Amazon afin d&#8217;autoriser la connexion \u00e0 la VIP pour d&#8217;autres VMs d&#8217;un subnet diff\u00e9rent et pouvoir ainsi autoriser les connexions.<\/p>\n<p>&nbsp;<\/p>\n<h2>Exploitation du cluster<\/h2>\n<h3><em><span style=\"color: #333333;\">Cas d&#8217;une bascule d&#8217;instance PostgreSQL<\/span><\/em><\/h3>\n<p>Il sera tout \u00e0 fait possible d&#8217;effectuer un switchover d&#8217;instance PostgreSQL.<br \/>\nRappelons que pour notre exemple, l&#8217;instance primaire est sur ip-172-44-2-143 et la standby sur ip-172-44-2-226 :<\/p>\n<pre># pcs status --full\r\n\u2026..\r\n Master\/Slave Set: pgsql-ha [pgsqld]\r\n \u00a0\u00a0\u00a0 pgsqld\u00a0\u00a0\u00a0\u00a0 (ocf::heartbeat:pgsqlms):\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Slave ip-172-44-2-226\r\n<strong> \u00a0\u00a0\u00a0 pgsqld\u00a0\u00a0\u00a0\u00a0 (ocf::heartbeat:pgsqlms):\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Master ip-172-44-2-143<\/strong>\r\n \u00a0\u00a0\u00a0 Masters: [ ip-172-44-2-143 ]\r\n \u00a0\u00a0\u00a0 Slaves: [ ip-172-44-2-226 ]\r\nclusterfence\u00a0\u00a0 (stonith:fence_aws):\u00a0\u00a0\u00a0 Started ip-172-44-2-143\r\n Resource Group: aws-group\r\n \u00a0\u00a0\u00a0 pgsql_awsvip\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 (ocf::heartbeat:awsvip):\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Started ip-172-44-2-143\r\n\u00a0\u00a0\u00a0\u00a0 pgsql-master-ip\u00a0\u00a0\u00a0 (ocf::heartbeat:IPaddr2):\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Started ip-172-44-2-143<\/pre>\n<p>&nbsp;<\/p>\n<p>S&#8217;il l&#8217;on souhaite inverser les r\u00f4les, donc passer l&#8217;instance primaire sur ip-172-44-2-226, et la standby sur ip-172-44-2-143, nous lancerons la commande 
suivante;<\/p>\n<pre># pcs resource move --wait --master pgsql-ha\r\nResource 'pgsql-ha' is master on node ip-172-44-2-226; slave on node ip-172-44-2-143.<\/pre>\n<p>&nbsp;<\/p>\n<p>Dans le cas d&#8217;un cluster 3 noeuds et plus, ajouter \u00e0 la fin de la commande, le nom du noeud.<br \/>\nRegardons l&#8217;\u00e9tat du cluster, et de l&#8217;instance PostgreSQL :<\/p>\n<pre>[root@ip-172-44-2-226:0 ~]# pcs status --full\r\n...\r\n Master\/Slave Set: pgsql-ha [pgsqld]\r\n <strong>\u00a0\u00a0\u00a0 pgsqld\u00a0\u00a0\u00a0\u00a0 (ocf::heartbeat:pgsqlms):\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Master ip-172-44-2-226<\/strong>\r\n \u00a0\u00a0\u00a0 pgsqld\u00a0\u00a0\u00a0\u00a0 (ocf::heartbeat:pgsqlms):\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Slave ip-172-44-2-143\r\n \u00a0\u00a0\u00a0 Masters: [ ip-172-44-2-226 ]\r\n \u00a0\u00a0\u00a0 Slaves: [ ip-172-44-2-143 ]\r\n clusterfence\u00a0\u00a0 (stonith:fence_aws):\u00a0\u00a0\u00a0 Started ip-172-44-2-143\r\n Resource Group: aws-group\r\n \u00a0\u00a0\u00a0 pgsql_awsvip\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 (ocf::heartbeat:awsvip):\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Started ip-172-44-2-226\r\n\u00a0\u00a0\u00a0\u00a0 pgsql-master-ip\u00a0\u00a0\u00a0 (ocf::heartbeat:IPaddr2):\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Started ip-172-44-2-226<\/pre>\n<p>&nbsp;<\/p>\n<p>Nous remarquons \u00e9galement que la VIP locale et l&#8217;adresse IP secondaire Amazon ont bascul\u00e9es vers ip-172.44.2.226<\/p>\n<pre># ifconfig\r\n\r\neth0: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;\u00a0 mtu 9001\r\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 inet 172.44.2.226\u00a0 netmask 255.255.255.0\u00a0 broadcast 172.44.2.255\r\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 inet6 fe80::8ae:d4ff:feac:9526\u00a0 prefixlen 64\u00a0 scopeid 0x20&lt;link&gt;\r\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 ether 0a:ae:d4:ac:95:26\u00a0 txqueuelen 1000\u00a0 (Ethernet)\r\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 RX packets 518354\u00a0 bytes 389187631 (371.1 MiB)\r\n 
\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 RX errors 0\u00a0 dropped 0\u00a0 overruns 0\u00a0 frame 0\r\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 TX packets 394871\u00a0 bytes 219328497 (209.1 MiB)\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 TX errors 0\u00a0 dropped 0 overruns 0\u00a0 carrier 0\u00a0 collisions 0\r\n\r\n\r\neth0:1: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;\u00a0 mtu 9001\r\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 inet 172.44.2.144\u00a0 netmask 255.255.255.0\u00a0 broadcast 172.44.2.255\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 ether 0a:ae:d4:ac:95:26\u00a0 txqueuelen 1000\u00a0 (Ethernet)<\/pre>\n<p>&nbsp;<\/p>\n<p>And on the AWS side:<\/p>\n<p>&nbsp;<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-8310 size-full\" src=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2020\/09\/ipsec_226.jpg\" alt=\"\" width=\"426\" height=\"87\" srcset=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2020\/09\/ipsec_226.jpg 426w, https:\/\/blog.capdata.fr\/wp-content\/uploads\/2020\/09\/ipsec_226-300x61.jpg 300w\" sizes=\"auto, (max-width: 426px) 100vw, 426px\" \/><\/p>\n<pre>#\u00a0 aws ec2 describe-instances --instance-id $Instance_ID --region eu-west-3 | grep \"PrivateIpAddress\"\r\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"PrivateIpAddress\": \"172.44.2.226\",\r\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \u00a0\u00a0\u00a0\u00a0\"PrivateIpAddress\": \"172.44.2.226\",\r\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"PrivateIpAddresses\": [\r\n \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"PrivateIpAddress\": 
\"172.44.2.226\"\r\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \"PrivateIpAddress\": \"172.44.2.144\"<\/pre>\n<p>&nbsp;<\/p>\n<p>Une connexion cliente viendra valider notre bascule. Notre connexion &#8220;repli&#8221; de l&#8217;instance standby se fait depuis ip-172-44-2-143.<\/p>\n<p>&nbsp;<\/p>\n<pre>[postgres@ip-172-44-2-194 ~]$ psql -h 172.44.2.144 -U manu manuelo\r\nPassword for user manu:\r\npsql (12.0, server 12.4)\r\nType \"help\" for help.\r\n\r\nmanuelo=# select usename,client_addr,query from pg_stat_activity where usename is not null;\r\n usename  | client_addr  | query\r\n ---------+--------------+-----------------------------------------------------------------------------------\r\n repli    | 172.44.2.143 |\r\n postgres |              |\r\n manu     | 172.44.2.194 | select usename,client_addr,query from pg_stat_activity where usename is not null;\r\n\r\n(3 rows)<\/pre>\n<p>&nbsp;<\/p>\n<p>&nbsp;<\/p>\n<h3><em>fencing de noeud<\/em><\/h3>\n<p>Lorsque nous avons effectu\u00e9 notre bascule de la ressource &#8220;pgsql-ha&#8221;, nous avons vu que nous sommes pass\u00e9s, pour l&#8217;instance PostgreSQL et la VIP, du noeud ip-172-44-2-143 vers ip-172-44-2-226.<\/p>\n<p>En revanche, ce que nous voyons \u00e9galement, c&#8217;est que la ressource &#8220;clusterfence&#8221; est rest\u00e9e en mode master sur ip-172-44-2-143.<\/p>\n<p>Cela veut dire que, au red\u00e9marrage complet du cluster, c&#8217;est ce n\u0153ud qui est configur\u00e9 comme master, et donc qui reprendra l&#8217;instance PostgreSQL en tant que primaire.<\/p>\n<p>S&#8217;il l&#8217;on tente un fencing sur l&#8217;instance EC2 AWS ip-172-44-2-143.<\/p>\n<pre># pcs stonith fence ip-172-44-2-143\r\nNode: ip-172-44-2-143 fenced<\/pre>\n<p>Nous perdons imm\u00e9diatement la connexion au serveur ip-172-44-2-143. 
The node now has to reboot.<br \/>\nOnce it comes back up, nothing changes, since &#8220;pgsql-ha&#8221; was already primary on ip-172-44-2-226.<\/p>\n<pre>Full list of resources:\r\n\r\n Master\/Slave Set: pgsql-ha [pgsqld]\r\n<strong> \u00a0\u00a0\u00a0 pgsqld\u00a0\u00a0\u00a0\u00a0 (ocf::heartbeat:pgsqlms):\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Master ip-172-44-2-226<\/strong>\r\n \u00a0\u00a0\u00a0 pgsqld\u00a0\u00a0\u00a0\u00a0 (ocf::heartbeat:pgsqlms):\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Slave ip-172-44-2-143\r\n \u00a0\u00a0\u00a0 Masters: [ ip-172-44-2-226 ]\r\n \u00a0\u00a0\u00a0 Slaves: [ ip-172-44-2-143 ]\r\n clusterfence\u00a0\u00a0 (stonith:fence_aws):\u00a0\u00a0\u00a0 Started ip-172-44-2-143\r\n Resource Group: aws-group\r\n \u00a0\u00a0\u00a0 pgsql_awsvip\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 (ocf::heartbeat:awsvip):\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Started ip-172-44-2-226\r\n\u00a0\u00a0\u00a0\u00a0 pgsql-master-ip\u00a0\u00a0\u00a0 (ocf::heartbeat:IPaddr2):\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Started ip-172-44-2-226<\/pre>\n<p>&nbsp;<\/p>\n<p>So there was no interruption at all on the PostgreSQL instance.<br \/>\nThe &#8220;pcs status --full&#8221; command also shows the fencing history for this cluster:<\/p>\n<pre>Fencing History:\r\n* reboot of ip-172-44-2-143 successful: delegate=ip-172-44-2-226, client=stonith_admin.24029, origin=ip-172-44-2-226,\r\ncompleted='Fri Sep 4 12:56:55 2020'<\/pre>\n<p>&nbsp;<\/p>\n<p>But what happens if it is the primary node that gets fenced?<\/p>\n<p>&nbsp;<\/p>\n<pre>[root@ip-172-44-2-143:0 ~]# pcs stonith fence ip-172-44-2-226\r\nNode: ip-172-44-2-226 fenced<\/pre>\n<p>&nbsp;<\/p>\n<p>Note that the command takes a little longer to return, as there are more operations in flight.<br \/>\n&#8230;<\/p>\n<p>And thanks to PAF, failover happens right away 
!!<\/p>\n<p>&nbsp;<\/p>\n<pre>[root@ip-172-44-2-143:0 ~]# pcs status --full\r\n\u2026.\r\nOnline: [ ip-172-44-2-143 (2) ]\r\nOFFLINE: [ ip-172-44-2-226 (1) ]\r\n\r\nFull list of resources:\r\n\r\n Master\/Slave Set: pgsql-ha [pgsqld]\r\n<strong> \u00a0\u00a0\u00a0 pgsqld\u00a0\u00a0\u00a0\u00a0 (ocf::heartbeat:pgsqlms):\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Master ip-172-44-2-143<\/strong>\r\n \u00a0\u00a0\u00a0 pgsqld\u00a0\u00a0\u00a0\u00a0 (ocf::heartbeat:pgsqlms):\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Stopped\r\n \u00a0\u00a0\u00a0 Masters: [ ip-172-44-2-143 ]\r\n \u00a0\u00a0\u00a0 Stopped: [ ip-172-44-2-226 ]\r\n clusterfence\u00a0\u00a0 (stonith:fence_aws):\u00a0\u00a0\u00a0 Started ip-172-44-2-143\r\n Resource Group: aws-group\r\n \u00a0\u00a0\u00a0 pgsql_awsvip\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 (ocf::heartbeat:awsvip):\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Started ip-172-44-2-143\r\n\u00a0\u00a0\u00a0\u00a0 pgsql-master-ip\u00a0\u00a0\u00a0 (ocf::heartbeat:IPaddr2):\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Started ip-172-44-2-143<\/pre>\n<p>&nbsp;<\/p>\n<p>Node ip-172-44-2-226 is reported OFFLINE, but we immediately get the primary PostgreSQL instance back on ip-172-44-2-143.<\/p>\n<p>Once node ip-172-44-2-226 has fully finished rebooting, its PostgreSQL instance comes back up as a slave.<\/p>\n<p>&nbsp;<\/p>\n<pre>[root@ip-172-44-2-143:0 ~]# pcs status --full\r\n\u2026\r\nOnline: [ ip-172-44-2-143 (2) ip-172-44-2-226 (1) ]\r\n\r\nFull list of resources:\r\n\r\n\r\n Master\/Slave Set: pgsql-ha [pgsqld]\r\n \u00a0\u00a0\u00a0 pgsqld\u00a0\u00a0 \u00a0\u00a0(ocf::heartbeat:pgsqlms):\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Slave ip-172-44-2-226\r\n<strong> \u00a0\u00a0\u00a0 pgsqld\u00a0\u00a0\u00a0\u00a0 (ocf::heartbeat:pgsqlms):\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Master ip-172-44-2-143<\/strong>\r\n \u00a0\u00a0\u00a0 Masters: [ ip-172-44-2-143 ]\r\n \u00a0\u00a0\u00a0 Slaves: [ 
ip-172-44-2-226 ]\r\n clusterfence\u00a0\u00a0 (stonith:fence_aws):\u00a0\u00a0\u00a0 Started ip-172-44-2-143\r\n Resource Group: aws-group\r\n \u00a0\u00a0\u00a0 pgsql_awsvip\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 (ocf::heartbeat:awsvip):\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Started ip-172-44-2-143\r\n\u00a0\u00a0\u00a0\u00a0 pgsql-master-ip\u00a0\u00a0\u00a0 (ocf::heartbeat:IPaddr2):\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 Started ip-172-44-2-143<\/pre>\n<p>&nbsp;<\/p>\n<p>Once again, the fencing history shows that everything went smoothly:<\/p>\n<pre>Fencing History:\r\n* reboot of ip-172-44-2-226 successful: delegate=ip-172-44-2-143, client=stonith_admin.2316, origin=ip-172-44-2-143,\r\n\u00a0\u00a0\u00a0 completed='Fri Sep\u00a0 4 13:10:37 2020'<\/pre>\n<p>&nbsp;<\/p>\n<h2>Conclusion<\/h2>\n<p>&nbsp;<\/p>\n<p>Thanks to the &#8220;fence-agents-aws&#8221; package available in the Red Hat \/ CentOS repositories, we have an effective fencing solution for running a highly available PostgreSQL cluster on Amazon VMs.<br \/>\nWithin AWS, the VIP then comes down to assigning a second private IP address to the node hosting the primary instance.<\/p>\n<p>&nbsp;<\/p>\n<p>This package also lets us work with an Elastic IP:<\/p>\n<pre># pcs resource describe awseip\r\nAssumed agent name 'ocf:heartbeat:awseip' (deduced from 'awseip')\r\nocf:heartbeat:awseip - Amazon AWS Elastic IP Address Resource Agent\r\n\r\nResource Agent for Amazon AWS Elastic IP Addresses.\r\n\r\nIt manages AWS Elastic IP Addresses with awscli.<\/pre>\n<p>&nbsp;<\/p>\n<p>This Elastic IP can then fail over from one EC2 VM to another within the cluster.<br \/>\nThis can be useful for a highly available website or application server running on Amazon VMs and requiring a public IP.<\/p>\n<p>&nbsp;<\/p>\n<p>See you soon!<\/p>\n<p>&nbsp;<\/p>\n<p>Emmanuel RAMI<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Hello apr\u00e8s ces quelques mois d&#8217;absence li\u00e9s \u00e0 ces contraintes sanitaires, nous voici de retour pour un nouveau sujet AWS PostgreSQL. 
Nous avons r\u00e9cemment mis en place un cluster Haute Disponibilit\u00e9 chez un de nos clients en utilisant la solution&hellip; <a href=\"https:\/\/blog.capdata.fr\/index.php\/aws-configurer-un-cluster-postgresql-hd-avec-corosync-pacemaker-sur-des-ec2-amazon\/\" class=\"more-link\">Continuer la lecture <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":32,"featured_media":8288,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[295,266],"tags":[],"class_list":["post-8293","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-aws","category-postgresql"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v20.8 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>AWS : Configurer un cluster PostgreSQL HD avec Corosync\/Pacemaker sur des EC2 Amazon - Capdata TECH BLOG<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/blog.capdata.fr\/index.php\/aws-configurer-un-cluster-postgresql-hd-avec-corosync-pacemaker-sur-des-ec2-amazon\/\" \/>\n<meta property=\"og:locale\" content=\"fr_FR\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"AWS : Configurer un cluster PostgreSQL HD avec Corosync\/Pacemaker sur des EC2 Amazon - Capdata TECH BLOG\" \/>\n<meta property=\"og:description\" content=\"Hello apr\u00e8s ces quelques mois d&#8217;absence li\u00e9s \u00e0 ces contraintes sanitaires, nous voici de retour pour un nouveau sujet AWS PostgreSQL. 
Nous avons r\u00e9cemment mis en place un cluster Haute Disponibilit\u00e9 chez un de nos clients en utilisant la solution&hellip; Continuer la lecture &rarr;\" \/>\n<meta property=\"og:url\" content=\"https:\/\/blog.capdata.fr\/index.php\/aws-configurer-un-cluster-postgresql-hd-avec-corosync-pacemaker-sur-des-ec2-amazon\/\" \/>\n<meta property=\"og:site_name\" content=\"Capdata TECH BLOG\" \/>\n<meta property=\"article:published_time\" content=\"2020-09-04T16:05:36+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2020\/09\/heartbeat.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"374\" \/>\n\t<meta property=\"og:image:height\" content=\"191\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Emmanuel RAMI\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"\u00c9crit par\" \/>\n\t<meta name=\"twitter:data1\" content=\"Emmanuel RAMI\" \/>\n\t<meta name=\"twitter:label2\" content=\"Dur\u00e9e de lecture estim\u00e9e\" \/>\n\t<meta name=\"twitter:data2\" content=\"19 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/blog.capdata.fr\/index.php\/aws-configurer-un-cluster-postgresql-hd-avec-corosync-pacemaker-sur-des-ec2-amazon\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/blog.capdata.fr\/index.php\/aws-configurer-un-cluster-postgresql-hd-avec-corosync-pacemaker-sur-des-ec2-amazon\/\"},\"author\":{\"name\":\"Emmanuel RAMI\",\"@id\":\"https:\/\/blog.capdata.fr\/#\/schema\/person\/797b9b6698fa35f7ce3e9a70a8b102ae\"},\"headline\":\"AWS : Configurer un cluster PostgreSQL HD avec Corosync\/Pacemaker sur des EC2 
Amazon\",\"datePublished\":\"2020-09-04T16:05:36+00:00\",\"dateModified\":\"2020-09-04T16:05:36+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/blog.capdata.fr\/index.php\/aws-configurer-un-cluster-postgresql-hd-avec-corosync-pacemaker-sur-des-ec2-amazon\/\"},\"wordCount\":2120,\"commentCount\":2,\"publisher\":{\"@id\":\"https:\/\/blog.capdata.fr\/#organization\"},\"articleSection\":[\"AWS\",\"PostgreSQL\"],\"inLanguage\":\"fr-FR\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/blog.capdata.fr\/index.php\/aws-configurer-un-cluster-postgresql-hd-avec-corosync-pacemaker-sur-des-ec2-amazon\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/blog.capdata.fr\/index.php\/aws-configurer-un-cluster-postgresql-hd-avec-corosync-pacemaker-sur-des-ec2-amazon\/\",\"url\":\"https:\/\/blog.capdata.fr\/index.php\/aws-configurer-un-cluster-postgresql-hd-avec-corosync-pacemaker-sur-des-ec2-amazon\/\",\"name\":\"AWS : Configurer un cluster PostgreSQL HD avec Corosync\/Pacemaker sur des EC2 Amazon - Capdata TECH BLOG\",\"isPartOf\":{\"@id\":\"https:\/\/blog.capdata.fr\/#website\"},\"datePublished\":\"2020-09-04T16:05:36+00:00\",\"dateModified\":\"2020-09-04T16:05:36+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/blog.capdata.fr\/index.php\/aws-configurer-un-cluster-postgresql-hd-avec-corosync-pacemaker-sur-des-ec2-amazon\/#breadcrumb\"},\"inLanguage\":\"fr-FR\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/blog.capdata.fr\/index.php\/aws-configurer-un-cluster-postgresql-hd-avec-corosync-pacemaker-sur-des-ec2-amazon\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/blog.capdata.fr\/index.php\/aws-configurer-un-cluster-postgresql-hd-avec-corosync-pacemaker-sur-des-ec2-amazon\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Accueil\",\"item\":\"https:\/\/blog.capdata.fr\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"AWS : Configurer un cluster PostgreSQL HD 
avec Corosync\/Pacemaker sur des EC2 Amazon\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/blog.capdata.fr\/#website\",\"url\":\"https:\/\/blog.capdata.fr\/\",\"name\":\"Capdata TECH BLOG\",\"description\":\"Le blog technique sur les bases de donn\u00e9es de CAP DATA Consulting\",\"publisher\":{\"@id\":\"https:\/\/blog.capdata.fr\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/blog.capdata.fr\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"fr-FR\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/blog.capdata.fr\/#organization\",\"name\":\"Capdata TECH BLOG\",\"url\":\"https:\/\/blog.capdata.fr\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\/\/blog.capdata.fr\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/01\/logo_capdata.webp\",\"contentUrl\":\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/01\/logo_capdata.webp\",\"width\":800,\"height\":254,\"caption\":\"Capdata TECH BLOG\"},\"image\":{\"@id\":\"https:\/\/blog.capdata.fr\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.linkedin.com\/company\/cap-data-consulting\/mycompany\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/blog.capdata.fr\/#\/schema\/person\/797b9b6698fa35f7ce3e9a70a8b102ae\",\"name\":\"Emmanuel RAMI\",\"sameAs\":[\"https:\/\/blog.capdata.fr\"],\"url\":\"https:\/\/blog.capdata.fr\/index.php\/author\/erami\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"AWS : Configurer un cluster PostgreSQL HD avec Corosync\/Pacemaker sur des EC2 Amazon - Capdata TECH BLOG","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/blog.capdata.fr\/index.php\/aws-configurer-un-cluster-postgresql-hd-avec-corosync-pacemaker-sur-des-ec2-amazon\/","og_locale":"fr_FR","og_type":"article","og_title":"AWS : Configurer un cluster PostgreSQL HD avec Corosync\/Pacemaker sur des EC2 Amazon - Capdata TECH BLOG","og_description":"Hello apr\u00e8s ces quelques mois d&#8217;absence li\u00e9s \u00e0 ces contraintes sanitaires, nous voici de retour pour un nouveau sujet AWS PostgreSQL. Nous avons r\u00e9cemment mis en place un cluster Haute Disponibilit\u00e9 chez un de nos clients en utilisant la solution&hellip; Continuer la lecture &rarr;","og_url":"https:\/\/blog.capdata.fr\/index.php\/aws-configurer-un-cluster-postgresql-hd-avec-corosync-pacemaker-sur-des-ec2-amazon\/","og_site_name":"Capdata TECH BLOG","article_published_time":"2020-09-04T16:05:36+00:00","og_image":[{"width":374,"height":191,"url":"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2020\/09\/heartbeat.jpg","type":"image\/jpeg"}],"author":"Emmanuel RAMI","twitter_card":"summary_large_image","twitter_misc":{"\u00c9crit par":"Emmanuel RAMI","Dur\u00e9e de lecture estim\u00e9e":"19 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/blog.capdata.fr\/index.php\/aws-configurer-un-cluster-postgresql-hd-avec-corosync-pacemaker-sur-des-ec2-amazon\/#article","isPartOf":{"@id":"https:\/\/blog.capdata.fr\/index.php\/aws-configurer-un-cluster-postgresql-hd-avec-corosync-pacemaker-sur-des-ec2-amazon\/"},"author":{"name":"Emmanuel RAMI","@id":"https:\/\/blog.capdata.fr\/#\/schema\/person\/797b9b6698fa35f7ce3e9a70a8b102ae"},"headline":"AWS : Configurer un cluster PostgreSQL HD avec 
Corosync\/Pacemaker sur des EC2 Amazon","datePublished":"2020-09-04T16:05:36+00:00","dateModified":"2020-09-04T16:05:36+00:00","mainEntityOfPage":{"@id":"https:\/\/blog.capdata.fr\/index.php\/aws-configurer-un-cluster-postgresql-hd-avec-corosync-pacemaker-sur-des-ec2-amazon\/"},"wordCount":2120,"commentCount":2,"publisher":{"@id":"https:\/\/blog.capdata.fr\/#organization"},"articleSection":["AWS","PostgreSQL"],"inLanguage":"fr-FR","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/blog.capdata.fr\/index.php\/aws-configurer-un-cluster-postgresql-hd-avec-corosync-pacemaker-sur-des-ec2-amazon\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/blog.capdata.fr\/index.php\/aws-configurer-un-cluster-postgresql-hd-avec-corosync-pacemaker-sur-des-ec2-amazon\/","url":"https:\/\/blog.capdata.fr\/index.php\/aws-configurer-un-cluster-postgresql-hd-avec-corosync-pacemaker-sur-des-ec2-amazon\/","name":"AWS : Configurer un cluster PostgreSQL HD avec Corosync\/Pacemaker sur des EC2 Amazon - Capdata TECH BLOG","isPartOf":{"@id":"https:\/\/blog.capdata.fr\/#website"},"datePublished":"2020-09-04T16:05:36+00:00","dateModified":"2020-09-04T16:05:36+00:00","breadcrumb":{"@id":"https:\/\/blog.capdata.fr\/index.php\/aws-configurer-un-cluster-postgresql-hd-avec-corosync-pacemaker-sur-des-ec2-amazon\/#breadcrumb"},"inLanguage":"fr-FR","potentialAction":[{"@type":"ReadAction","target":["https:\/\/blog.capdata.fr\/index.php\/aws-configurer-un-cluster-postgresql-hd-avec-corosync-pacemaker-sur-des-ec2-amazon\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/blog.capdata.fr\/index.php\/aws-configurer-un-cluster-postgresql-hd-avec-corosync-pacemaker-sur-des-ec2-amazon\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Accueil","item":"https:\/\/blog.capdata.fr\/"},{"@type":"ListItem","position":2,"name":"AWS : Configurer un cluster PostgreSQL HD avec Corosync\/Pacemaker sur des EC2 
Amazon"}]},{"@type":"WebSite","@id":"https:\/\/blog.capdata.fr\/#website","url":"https:\/\/blog.capdata.fr\/","name":"Capdata TECH BLOG","description":"Le blog technique sur les bases de donn\u00e9es de CAP DATA Consulting","publisher":{"@id":"https:\/\/blog.capdata.fr\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/blog.capdata.fr\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"fr-FR"},{"@type":"Organization","@id":"https:\/\/blog.capdata.fr\/#organization","name":"Capdata TECH BLOG","url":"https:\/\/blog.capdata.fr\/","logo":{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/blog.capdata.fr\/#\/schema\/logo\/image\/","url":"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/01\/logo_capdata.webp","contentUrl":"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/01\/logo_capdata.webp","width":800,"height":254,"caption":"Capdata TECH BLOG"},"image":{"@id":"https:\/\/blog.capdata.fr\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.linkedin.com\/company\/cap-data-consulting\/mycompany\/"]},{"@type":"Person","@id":"https:\/\/blog.capdata.fr\/#\/schema\/person\/797b9b6698fa35f7ce3e9a70a8b102ae","name":"Emmanuel 
RAMI","sameAs":["https:\/\/blog.capdata.fr"],"url":"https:\/\/blog.capdata.fr\/index.php\/author\/erami\/"}]}},"_links":{"self":[{"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/posts\/8293","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/users\/32"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/comments?post=8293"}],"version-history":[{"count":24,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/posts\/8293\/revisions"}],"predecessor-version":[{"id":8319,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/posts\/8293\/revisions\/8319"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/media\/8288"}],"wp:attachment":[{"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/media?parent=8293"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/categories?post=8293"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/tags?post=8293"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}