<h1>Zero-downtime version upgrades: thank you, replication!</h1>
<p>Capdata TECH BLOG, Sarah FAVEERE, 2024-12-19</p>
<h1>Introduction</h1>
<p>In the database world, continuous availability is a hard requirement, especially for critical systems where every minute of downtime can translate into significant losses. When it comes to migrating a database to a new major version, that challenge takes on a whole new dimension: how do you upgrade your system without interrupting services, while preserving data integrity?</p>
<p>PostgreSQL offers an elegant answer: logical replication. It streams data smoothly between different PostgreSQL versions while the source database stays fully operational. In this article we will walk step by step through using this feature to perform a version upgrade with no downtime, from the initial deployment to the final switchover to the new version.</p>
<p>Whether you are planning a migration or simply curious about what PostgreSQL can do, this practical guide will help you turn a complex challenge into a controlled, efficient operation.</p>
<h1>The test</h1>
<h3>1. Preparation</h3>
<p>To test this method, we will need two PostgreSQL instances.</p>
<p>For this article I chose to demonstrate the technique by migrating from PostgreSQL 14 to PostgreSQL 17.</p>
<p>I therefore start by installing the two versions on two separate machines that can communicate with each other (this is important).</p>
<p>On both machines we can run the following commands:</p>
<pre class="brush: bash; title: ; notranslate" title="">
root@ip-192-1-1-246:~# sudo apt update &amp;&amp; sudo apt upgrade -y

...

root@ip-192-1-1-246:~# sudo apt -y install gnupg2 wget vim

...

root@ip-192-1-1-246:~# sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" &gt; /etc/apt/sources.list.d/pgdg.list'
root@ip-192-1-1-246:~# curl -fsSL https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/postgresql.gpg

root@ip-192-1-1-246:~# sudo apt -y update
Get:1 file:/etc/apt/mirrors/debian.list Mirrorlist [38 B]
Get:2 file:/etc/apt/mirrors/debian-security.list Mirrorlist [47 B]
Hit:3 https://cdn-aws.deb.debian.org/debian bookworm InRelease
Hit:4 https://cdn-aws.deb.debian.org/debian bookworm-updates InRelease
Hit:5 https://cdn-aws.deb.debian.org/debian bookworm-backports InRelease
Hit:6 https://cdn-aws.deb.debian.org/debian-security bookworm-security InRelease
Get:7 http://apt.postgresql.org/pub/repos/apt bookworm-pgdg InRelease [129 kB]
Get:8 http://apt.postgresql.org/pub/repos/apt bookworm-pgdg/main amd64 Packages [359 kB]
Fetched 489 kB in 1s (348 kB/s)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
All packages are up to date.
</pre>
<p>Then on our first machine:</p>
<pre class="brush: bash; title: ; notranslate" title="">
root@ip-192-1-1-246:~# sudo apt install postgresql-14
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
libcommon-sense-perl libgdbm-compat4 libio-pty-perl libipc-run-perl
libjson-perl libjson-xs-perl libllvm16 libperl5.36 libpq5 libsensors-config
libsensors5 libtypes-serialiser-perl libxslt1.1 libz3-4 logrotate perl
perl-modules-5.36 postgresql-client-14 postgresql-client-common
postgresql-common ssl-cert sysstat

...

root@ip-192-1-1-246:~# systemctl status postgresql@14-main.service
● postgresql@14-main.service - PostgreSQL Cluster 14-main
Loaded: loaded (/lib/systemd/system/postgresql@.service; enabled-runtime;&gt;
Active: active (running) since Wed 2024-12-04 09:43:55 UTC; 2min 55s ago
Process: 15248 ExecStart=/usr/bin/pg_ctlcluster --skip-systemctl-redirect &gt;
Main PID: 15253 (postgres)
Tasks: 7 (limit: 4633)
Memory: 17.3M
CPU: 239ms
CGroup: /system.slice/system-postgresql.slice/postgresql@14-main.service
├─15253 /usr/lib/postgresql/14/bin/postgres -D /var/lib/postgresq&gt;
├─15255 "postgres: 14/main: checkpointer "
├─15256 "postgres: 14/main: background writer "
├─15257 "postgres: 14/main: walwriter "
├─15258 "postgres: 14/main: autovacuum launcher "
├─15259 "postgres: 14/main: stats collector "
└─15260 "postgres: 14/main: logical replication launcher "

Dec 04 09:43:53 ip-192-1-1-246 systemd[1]: Starting postgresql@14-main.service&gt;
Dec 04 09:43:55 ip-192-1-1-246 systemd[1]: Started postgresql@14-main.service &gt;
</pre>
<p>Then on the second machine:</p>
<pre class="brush: bash; title: ; notranslate" title="">
admin@ip-192-1-1-89:~$ sudo apt install postgresql-17
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
libcommon-sense-perl libgdbm-compat4 libio-pty-perl libipc-run-perl
libjson-perl libjson-xs-perl libllvm16 libperl5.36 libpq5 libsensors-config
libsensors5 libtypes-serialiser-perl libxslt1.1 libz3-4 logrotate perl
perl-modules-5.36 postgresql-client-17 postgresql-client-common
postgresql-common ssl-cert sysstat

admin@ip-192-1-1-89:~$ systemctl status postgresql@17-main.service
● postgresql@17-main.service - PostgreSQL Cluster 17-main
Loaded: loaded (/lib/systemd/system/postgresql@.service; enabled-runtime; &gt;
Active: active (running) since Wed 2024-12-04 09:52:33 UTC; 2min 13s ago
Process: 15235 ExecStart=/usr/bin/pg_ctlcluster --skip-systemctl-redirect 1&gt;
Main PID: 15240 (postgres)
Tasks: 6 (limit: 4633)
Memory: 20.5M
CPU: 332ms
CGroup: /system.slice/system-postgresql.slice/postgresql@17-main.service
├─15240 /usr/lib/postgresql/17/bin/postgres -D /var/lib/postgresql&gt;
├─15241 "postgres: 17/main: checkpointer "
├─15242 "postgres: 17/main: background writer "
├─15244 "postgres: 17/main: walwriter "
├─15245 "postgres: 17/main: autovacuum launcher "
└─15246 "postgres: 17/main: logical replication launcher "

Dec 04 09:52:31 ip-192-1-1-89 systemd[1]: Starting postgresql@17-main.service -&gt;
Dec 04 09:52:33 ip-192-1-1-89 systemd[1]: Started postgresql@17-main.service -&gt;
</pre>
<p>Our two instances are now installed.</p>
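<p>Since the two machines must be able to reach each other on port 5432, pg_isready gives a quick connectivity check. This transcript is a sketch using this article's addresses; note that it will only succeed once remote access has been opened up in the configuration steps of the next sections:</p>
<pre class="brush: bash; title: ; notranslate" title="">
admin@ip-192-1-1-89:~$ pg_isready -h 192.1.1.246 -p 5432
192.1.1.246:5432 - accepting connections
</pre>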
<p>On our first instance, we will create a database with two tables and a few rows.</p>
<pre class="brush: bash; title: ; notranslate" title="">
postgres@ip-192-1-1-246:/etc/postgresql/14/main$ psql
psql (14.15 (Debian 14.15-1.pgdg120+1))
Type "help" for help.
</pre>
<pre class="brush: sql; title: ; notranslate" title="">
postgres=# CREATE DATABASE mydb;
CREATE DATABASE
postgres=# \c mydb
You are now connected to database "mydb" as user "postgres".
mydb=# CREATE TABLE customers (
id SERIAL PRIMARY KEY,
name TEXT NOT NULL,
email TEXT UNIQUE,
created_at TIMESTAMP DEFAULT NOW()
);
CREATE TABLE
mydb=# CREATE TABLE orders (
id SERIAL PRIMARY KEY,
customer_id INT REFERENCES customers(id),
amount NUMERIC(10,2) NOT NULL,
order_date TIMESTAMP DEFAULT NOW()
);
CREATE TABLE
mydb=# INSERT INTO customers (name, email) VALUES
('Alice', 'alice@example.com'),
('Bob', 'bob@example.com'),
('Charlie', 'charlie@example.com');
INSERT 0 3
mydb=# INSERT INTO orders (customer_id, amount) VALUES
(1, 50.75),
(2, 20.00),
(1, 75.00);
INSERT 0 3
</pre>
<h3>2. Configure the source database</h3>
<p>On our first machine, we edit the PostgreSQL configuration file so that the replication can be created:</p>
<pre class="brush: bash; title: ; notranslate" title="">
root@ip-192-1-1-246:~# su - postgres
postgres@ip-192-1-1-246:~$ cd /etc/postgresql/14/main
postgres@ip-192-1-1-246:/etc/postgresql/14/main$ vi postgresql.conf
</pre>
<p>The following parameters need to be set:</p>
<blockquote><p>wal_level = logical<br />
max_replication_slots = 4<br />
max_wal_senders = 4</p></blockquote>
<p>If listen_addresses is still at its default (localhost only), it must also be opened up, for example with listen_addresses = '*', so that the other machine can connect.</p>
<p>We then edit pg_hba.conf to authorize connections between the two machines:</p>
<pre class="brush: bash; title: ; notranslate" title="">
postgres@ip-192-1-1-246:/etc/postgresql/14/main$ vi pg_hba.conf
</pre>
<p>It is enough to add the following lines (the address field must include a netmask, hence the /32 on a single IPv4 address):</p>
<blockquote><p>host replication all &lt;destination_ip&gt;/32 scram-sha-256<br />
host replication all &lt;source_ip&gt;/32 scram-sha-256<br />
host all replication &lt;destination_ip&gt;/32 scram-sha-256<br />
host all replication &lt;source_ip&gt;/32 scram-sha-256</p></blockquote>
<p>Don't forget to restart the PostgreSQL server once these changes are made (changing wal_level requires a restart, not just a reload):</p>
<pre class="brush: bash; title: ; notranslate" title="">
root@ip-192-1-1-246:~# systemctl stop postgresql@14-main.service
root@ip-192-1-1-246:~# systemctl start postgresql@14-main.service
</pre>
<h3>3. Configure the destination database</h3>
<p>Having configured the database we are migrating from, we now need to configure the one that will receive the migrated database.</p>
<p>To do so, we repeat the configuration steps from the source, adapted to our destination: edit postgresql.conf, then pg_hba.conf, then restart the instance.</p>
<pre class="brush: bash; title: ; notranslate" title="">
postgres@ip-192-1-1-89:~$ cd /etc/postgresql/17/main/
postgres@ip-192-1-1-89:/etc/postgresql/17/main$ vi postgresql.conf
</pre>
<blockquote><p>wal_level = logical<br />
max_replication_slots = 4<br />
max_wal_senders = 4</p></blockquote>
<pre class="brush: bash; title: ; notranslate" title="">
postgres@ip-192-1-1-89:/etc/postgresql/17/main$ vi pg_hba.conf
</pre>
<blockquote><p>host replication all &lt;destination_ip&gt;/32 scram-sha-256<br />
host replication all &lt;source_ip&gt;/32 scram-sha-256<br />
host all replication &lt;destination_ip&gt;/32 scram-sha-256<br />
host all replication &lt;source_ip&gt;/32 scram-sha-256</p></blockquote>
<pre class="brush: bash; title: ; notranslate" title="">
root@ip-192-1-1-89:~# systemctl stop postgresql@17-main.service
root@ip-192-1-1-89:~# systemctl start postgresql@17-main.service
</pre>
<p>Remember to create the database, along with all the table structures and other objects, on the target so that it can receive the data.</p>
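<p>After each restart, it is worth confirming that the new settings are actually active. A quick sanity check from psql (a sketch; these are standard parameter names):</p>
<pre class="brush: sql; title: ; notranslate" title="">
postgres=# SHOW wal_level;
 wal_level
-----------
 logical
(1 row)

postgres=# SELECT name, setting FROM pg_settings
           WHERE name IN ('listen_addresses', 'max_replication_slots', 'max_wal_senders');
</pre>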
<p>To obtain the creation scripts for the database objects, you can run pg_dump with the --schema-only option.</p>
<pre class="brush: bash; title: ; notranslate" title="">
postgres@ip-192-1-1-89:~$ psql
psql (17.2 (Debian 17.2-1.pgdg120+1))
Type "help" for help.
</pre>
<pre class="brush: sql; title: ; notranslate" title="">
postgres=# CREATE DATABASE mydb;
CREATE DATABASE
</pre>
<p>Don't forget to give your replication user all the rights it needs to read, write, and so on, on the replicated database, on the source as well as on the destination (this assumes a role named replication, with the LOGIN and REPLICATION attributes, exists on both instances):</p>
<pre class="brush: sql; title: ; notranslate" title="">
postgres=# GRANT ALL PRIVILEGES ON DATABASE "mydb" TO replication;
GRANT

mydb=# GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA public TO replication;
GRANT
</pre>
<h3>4. Setting up logical replication</h3>
<p>Now that both environments are in place, we are ready to start the logical replication process and begin transferring the data.</p>
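<p>The schema-only dump mentioned above can be done in a single pipeline. This transcript is a sketch using this article's lab hosts and the default postgres role; run it before creating the subscription so the target tables exist:</p>
<pre class="brush: bash; title: ; notranslate" title="">
# Copy the object definitions (no rows) from the v14 source into mydb on the v17 target.
postgres@ip-192-1-1-89:~$ pg_dump --schema-only -h 192.1.1.246 -U postgres -d mydb | psql -d mydb
</pre>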
<p>The steps so far required one intervention outside production hours, in particular to restart the PostgreSQL service, but the whole point of a migration based on logical replication is that from here on nothing needs touching until the moment the applications are switched from one IP to the other.</p>
<p>On our source machine, we create the publication that will be used to transfer our tables:</p>
<pre class="brush: bash; title: ; notranslate" title="">
postgres@ip-192-1-1-246:~$ psql
psql (14.15 (Debian 14.15-1.pgdg120+1))
Type "help" for help.
</pre>
<pre class="brush: sql; title: ; notranslate" title="">
postgres=# \c mydb
You are now connected to database "mydb" as user "postgres".
mydb=# CREATE PUBLICATION my_pub FOR ALL TABLES;
CREATE PUBLICATION
</pre>
<p>We then create the subscription on the target database of our migration:</p>
<pre class="brush: sql; title: ; notranslate" title="">
mydb=# CREATE SUBSCRIPTION my_sub CONNECTION 'host=192.1.1.246 port=5432 dbname=mydb user=replication password=replication' PUBLICATION my_pub;
NOTICE: created replication slot "my_sub" on publisher
CREATE SUBSCRIPTION
</pre>
<p>Now that the subscription is in place, we can check that it is working.</p>
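<p>A note on reading replication positions: views such as pg_stat_subscription report LSNs like 0/1733988, which are two hexadecimal numbers, a high 32-bit word and a byte offset within it. To quantify lag in bytes outside the database, the conversion is simple (a small sketch; inside SQL, the built-in pg_wal_lsn_diff() does the same job):</p>

```shell
# Convert a PostgreSQL LSN such as 0/1733988 into an absolute byte position:
# value = high_word * 2^32 + low_word (both parts are hexadecimal).
lsn_to_bytes() {
  local high=${1%%/*} low=${1##*/}
  echo $(( 16#$high * 4294967296 + 16#$low ))
}

# Bytes still in flight between what the publisher sent and what was applied:
sent=$(lsn_to_bytes "0/1733A00")
applied=$(lsn_to_bytes "0/1733988")
echo $(( sent - applied ))   # prints 120 (0xA00 - 0x988 = 0x78)
```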
<p>Meanwhile, the real production system, still on version 14, can keep running: its changes are automatically replicated to the new version 17.</p>
<p>We can check where our replication stands with SELECT * FROM pg_stat_subscription; run on the subscriber:</p>
<pre class="brush: sql; title: ; notranslate" title="">
mydb=# SELECT * FROM pg_stat_subscription;
-[ RECORD 1 ]---------+------------------------------
subid | 16422
subname | my_sub
worker_type | apply
pid | 16076
leader_pid |
relid |
received_lsn | 0/1733988
last_msg_send_time | 2024-12-04 14:23:59.873074+00
last_msg_receipt_time | 2024-12-04 14:23:59.872357+00
latest_end_lsn | 0/1733988
latest_end_time | 2024-12-04 14:23:59.873074+00
</pre>
<h3>5. Testing the replication, switching over, and cleaning up</h3>
<p>Once your logical replication has finished its initial synchronization, which can take a while if you have a lot of data, you can see for yourself that rows you add, modify or delete on the source instance are replicated to the destination instance.</p>
<p>For example, let's add a new customer on our source database:</p>
<pre class="brush: bash; title: ; notranslate" title="">
postgres@ip-192-1-1-246:~$ psql
psql (14.15 (Debian 14.15-1.pgdg120+1))
Type "help" for help.
</pre>
<pre class="brush: sql; title: ; notranslate" title="">
postgres=# \c mydb
You are now connected to database "mydb" as user "postgres".
mydb=# INSERT INTO customers (name, email) VALUES ('Diana', 'diana@example.com');
INSERT 0 1
</pre>
<p>If we query our destination instance:</p>
<pre class="brush: sql; title: ; notranslate" title="">
mydb=# SELECT * FROM customers WHERE name = 'Diana';
id | name | email | created_at
----+-------+-------------------+----------------------------
4 | Diana | diana@example.com | 2024-12-04 14:31:05.708031
(1 row)
</pre>
<p>Once you are confident that everything works, you can point your applications' ODBC drivers (or connection strings) at the new server instead of the old one. Bear in mind that logical replication does not carry sequence values over: resynchronize the target's sequences (for example with setval) before switching writes, so that new inserts do not collide with already-replicated rows.</p>
<p>Once that is done, you can remove the replication link, since the old instance will no longer be fed, and even decommission the old version if you no longer need it.</p>
<p>On the destination, our new production server:</p>
<pre class="brush: sql; title: ; notranslate" title="">
DROP SUBSCRIPTION my_sub;
</pre>
<p>On the source, the old server about to be retired:</p>
<pre class="brush: sql; title: ; notranslate" title="">
DROP PUBLICATION my_pub;
</pre>
<h1>Conclusion</h1>
<p>Logical replication stands out as one of the best ways to minimize downtime during a PostgreSQL version migration. By keeping the data continuously synchronized between two instances, it guarantees a smooth transition without ever interrupting running services.</p>
<p>This makes it an ideal choice for critical environments where availability is paramount.</p>
<h3>Advantages</h3>
<p><strong>Zero downtime:</strong> the source stays operational throughout the migration.<br />
<strong>Flexibility:</strong> you can migrate to a different infrastructure (new hardware, cloud, etc.).<br />
<strong>Granularity:</strong> logical replication can be limited to specific tables if needed.</p>
<h3>Drawbacks</h3>
<p><strong>Initial complexity:</strong> the configuration and testing require a good command of PostgreSQL's parameters.<br />
<strong>Performance impact:</strong> the replication load can slightly affect the source's performance, especially with large data volumes.<br />
<strong>Not everything is replicated:</strong> DDL changes, sequence values and large objects are not carried over, so schema changes made during the migration must be applied to both sides by hand.</p>
<p>While logical replication is often the preferred method for critical upgrades, it is not the only option. Alternatives such as backup-and-restore tools or physical replication can address other specific needs, notably very large databases or scenarios requiring a complete copy of the whole system.</p>
<p>In any case, the choice of method will depend on your context, your technical constraints and your business objectives.</p>
<p>Take the time to evaluate the different options to ensure a successful migration with no surprises.</p>
Consulting","publisher":{"@id":"https:\/\/blog.capdata.fr\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/blog.capdata.fr\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"fr-FR"},{"@type":"Organization","@id":"https:\/\/blog.capdata.fr\/#organization","name":"Capdata TECH BLOG","url":"https:\/\/blog.capdata.fr\/","logo":{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/blog.capdata.fr\/#\/schema\/logo\/image\/","url":"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/01\/logo_capdata.webp","contentUrl":"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/01\/logo_capdata.webp","width":800,"height":254,"caption":"Capdata TECH BLOG"},"image":{"@id":"https:\/\/blog.capdata.fr\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.linkedin.com\/company\/cap-data-consulting\/mycompany\/"]},{"@type":"Person","@id":"https:\/\/blog.capdata.fr\/#\/schema\/person\/686f2452f7ec79115d31e41c230a9da2","name":"Sarah 
FAVEERE","sameAs":["http:\/\/blog.capdata.fr"],"url":"https:\/\/blog.capdata.fr\/index.php\/author\/sfaveere\/"}]}},"_links":{"self":[{"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/posts\/10633","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/users\/41"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/comments?post=10633"}],"version-history":[{"count":29,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/posts\/10633\/revisions"}],"predecessor-version":[{"id":10668,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/posts\/10633\/revisions\/10668"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/media\/10665"}],"wp:attachment":[{"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/media?parent=10633"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/categories?post=10633"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/tags?post=10633"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}