{"id":10584,"date":"2024-07-16T12:24:05","date_gmt":"2024-07-16T11:24:05","guid":{"rendered":"https:\/\/blog.capdata.fr\/?p=10584"},"modified":"2024-07-17T11:29:01","modified_gmt":"2024-07-17T10:29:01","slug":"postgresql-17-sauvegardes-incrementales","status":"publish","type":"post","link":"https:\/\/blog.capdata.fr\/index.php\/postgresql-17-sauvegardes-incrementales\/","title":{"rendered":"PostgreSQL 17 : des sauvegardes incr\u00e9mentales avec pg_basebackup"},"content":{"rendered":"<a class=\"synved-social-button synved-social-button-share synved-social-size-24 synved-social-resolution-single synved-social-provider-twitter nolightbox\" data-provider=\"twitter\" target=\"_blank\" rel=\"nofollow\" title=\"Share on Twitter\" href=\"https:\/\/twitter.com\/intent\/tweet?url=https%3A%2F%2Fblog.capdata.fr%2Findex.php%2Fwp-json%2Fwp%2Fv2%2Fposts%2F10584&#038;text=Article%20sur%20le%20blog%20de%20la%20Capdata%20Tech%20Team%20%3A%20\" style=\"font-size: 0px;width:24px;height:24px;margin:0;margin-bottom:5px;margin-right:5px\"><img loading=\"lazy\" decoding=\"async\" alt=\"twitter\" title=\"Share on Twitter\" class=\"synved-share-image synved-social-image synved-social-image-share\" width=\"24\" height=\"24\" style=\"display: inline;width:24px;height:24px;margin: 0;padding: 0;border: none;box-shadow: none\" src=\"https:\/\/blog.capdata.fr\/wp-content\/plugins\/social-media-feather\/synved-social\/image\/social\/regular\/48x48\/twitter.png\" \/><\/a><a class=\"synved-social-button synved-social-button-share synved-social-size-24 synved-social-resolution-single synved-social-provider-linkedin nolightbox\" data-provider=\"linkedin\" target=\"_blank\" rel=\"nofollow\" title=\"Share on Linkedin\" href=\"https:\/\/www.linkedin.com\/shareArticle?mini=true&#038;url=https%3A%2F%2Fblog.capdata.fr%2Findex.php%2Fwp-json%2Fwp%2Fv2%2Fposts%2F10584&#038;title=PostgreSQL%2017%20%3A%20des%20sauvegardes%20incr%C3%A9mentales%20avec%20pg_basebackup\" style=\"font-size: 
0px;width:24px;height:24px;margin:0;margin-bottom:5px;margin-right:5px\"><img loading=\"lazy\" decoding=\"async\" alt=\"linkedin\" title=\"Share on Linkedin\" class=\"synved-share-image synved-social-image synved-social-image-share\" width=\"24\" height=\"24\" style=\"display: inline;width:24px;height:24px;margin: 0;padding: 0;border: none;box-shadow: none\" src=\"https:\/\/blog.capdata.fr\/wp-content\/plugins\/social-media-feather\/synved-social\/image\/social\/regular\/48x48\/linkedin.png\" \/><\/a><a class=\"synved-social-button synved-social-button-share synved-social-size-24 synved-social-resolution-single synved-social-provider-mail nolightbox\" data-provider=\"mail\" rel=\"nofollow\" title=\"Share by email\" href=\"mailto:?subject=PostgreSQL%2017%20%3A%20des%20sauvegardes%20incr%C3%A9mentales%20avec%20pg_basebackup&#038;body=Article%20sur%20le%20blog%20de%20la%20Capdata%20Tech%20Team%20%3A%20:%20https%3A%2F%2Fblog.capdata.fr%2Findex.php%2Fwp-json%2Fwp%2Fv2%2Fposts%2F10584\" style=\"font-size: 0px;width:24px;height:24px;margin:0;margin-bottom:5px\"><img loading=\"lazy\" decoding=\"async\" alt=\"mail\" title=\"Share by email\" class=\"synved-share-image synved-social-image synved-social-image-share\" width=\"24\" height=\"24\" style=\"display: inline;width:24px;height:24px;margin: 0;padding: 0;border: none;box-shadow: none\" src=\"https:\/\/blog.capdata.fr\/wp-content\/plugins\/social-media-feather\/synved-social\/image\/social\/regular\/48x48\/mail.png\" \/><\/a><p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-10592\" src=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2024\/07\/SalesGrowth.jpg\" alt=\"\" width=\"279\" height=\"180\" \/><\/p>\n<p>&nbsp;<\/p>\n<p>Bonjour<\/p>\n<p>Les 11 et 12 juin derniers, nous \u00e9tions aux journ\u00e9es PGDAY \u00e0 Lille pour d\u00e9couvrir les nouveaut\u00e9s autour de PostgreSQL.<br \/>\nCette conf\u00e9rence regroupe diff\u00e9rents professionnels, de la communaut\u00e9 francophone, 
who contribute both on technical topics and on best practices, so that PostgreSQL can be run under the best possible conditions.

One talk particularly caught my interest this year: the one by [Stefan Fercot](https://www.linkedin.com/in/stefan-fercot/?originalSubdomain=be), a senior PostgreSQL DBA who lives in Belgium and works for a German company specialising in PostgreSQL solutions. His presentation was entitled "demystifying incremental backups in PostgreSQL".

I listened to his talk, eager to try the feature myself as soon as I got back from Lille. Many thanks to Stefan for his work on the topic of PostgreSQL backups.

First of all, keep in mind that incremental backups have already been addressed by tools such as **Barman** and **pgBackRest**, and that some production PostgreSQL instances have been backed up with these mechanisms for several years now. Here we are talking about the incremental backup capability built natively into the PostgreSQL engine and exposed through the **pg_basebackup** tool. This is precisely the point Stefan highlighted during the PGDay session on 11 June.

This new feature ships with **PostgreSQL 17**, currently in **Beta 2**. As usual, the final release is expected in the course of next autumn. It is further proof that PostgreSQL keeps evolving: it joins the list of DBMSs, such as Oracle and SQL Server, that natively offer incremental backups.

## Installing PostgreSQL 17

To test this feature we need to install the very latest version of PostgreSQL, 17 Beta 2. Note that it is not yet available in the standard PGDG repositories, so we have to install it ourselves from the postgresql.org testing repository:

[https://download.postgresql.org/pub/repos/yum/testing/17/redhat/rhel-8-x86_64/](https://download.postgresql.org/pub/repos/yum/testing/17/redhat/rhel-8-x86_64/)

Our server runs a Red Hat 8 fork (Rocky Linux).
We therefore need to download the RPMs for this release. The packages we need are the following:

```
# ls -lrt postgresql1* | awk '{print $9}'
postgresql17-contrib-17-beta2_1PGDG.rhel8.x86_64.rpm
postgresql17-17-beta2_1PGDG.rhel8.x86_64.rpm
postgresql17-libs-17-beta2_1PGDG.rhel8.x86_64.rpm
postgresql17-server-17-beta2_1PGDG.rhel8.x86_64.rpm
```

We install them as the **root** user:

```
[root@ tmp]# rpm -i postgresql17-libs-17-beta2_1PGDG.rhel8.x86_64.rpm
[root@ tmp]# rpm -i postgresql17-17-beta2_1PGDG.rhel8.x86_64.rpm
[root@ tmp]# rpm -i postgresql17-server-17-beta2_1PGDG.rhel8.x86_64.rpm
[root@ tmp]# rpm -i postgresql17-contrib-17-beta2_1PGDG.rhel8.x86_64.rpm
```

Since we are on a "Red Hat like" environment, a first instance must be created with `initdb`. Above all, do not forget to enable data checksums (option `-k`); we will see why later in this article. The following steps are performed as the **postgres** user:

```
[postgres ~]$ initdb -D /data/postgres/17/pg_data -k
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.UTF-8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are enabled.

creating directory /data/postgres/17/pg_data ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default "max_connections" ... 100
selecting default "shared_buffers" ... 128MB
selecting default time zone ... UTC
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

initdb: warning: enabling "trust" authentication for local connections
initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    pg_ctl -D /data/postgres/17/pg_data -l logfile start
```

Start the instance to make sure everything works:

```
[postgres ~]$ pg_ctl -D /data/postgres/17/pg_data -l logfile start
waiting for server to start.... done
server started
```

The reported version is indeed a Beta 2.
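Since the `-k` flag matters for the incremental feature discussed below, it is worth confirming that data page checksums really are enabled. A minimal sketch (not from the original article) using `pg_controldata`, which ships with the postgresql17-server package; it skips gracefully on hosts where the binary is not on the PATH:

```shell
# Sanity-check that a cluster was initialized with data checksums (-k).
# check_checksums <pgdata-dir>: prints checksum info from the control file,
# and exits 0 even when pg_controldata is unavailable or the dir is unreadable.
check_checksums() {
    pgdata_dir="$1"
    if ! command -v pg_controldata >/dev/null 2>&1; then
        # pg_controldata ships with the postgresql17-server package
        echo "pg_controldata not found on PATH; skipping check"
        return 0
    fi
    # "Data page checksum version" is 0 when disabled, non-zero when enabled
    pg_controldata "$pgdata_dir" 2>/dev/null | grep -i "checksum" \
        || echo "could not read control data in $pgdata_dir"
}

# Path from this article's setup
check_checksums /data/postgres/17/pg_data
```

Alternatively, `SHOW data_checksums;` from psql gives the same answer on a running instance.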
This version must not be deployed to a production environment, as the PostgreSQL community site reminds us.

```
[postgres ~]$ psql
(postgres@[local]:5437) [postgres] > select * from version();
version
------------------------------------------------------------------------------------------------------------
PostgreSQL 17beta2 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-22), 64-bit
(1 row)
```

### Upgrading an existing cluster

Since this server already hosted a PostgreSQL 15 instance, we upgraded it with the `pg_upgrade` tool, still available in this new version.

Run pg_upgrade in check mode:

```
[postgres ~]$ pg_upgrade -b /usr/pgsql-15/bin/ -B /usr/pgsql-17/bin/ -c -d /data/postgres/15/pg_data/ -D /data/postgres/17/pg_data/ -p 5434 -P 5437
.....
.....

*Clusters are compatible*
"/usr/pgsql-17/bin/pg_ctl" -w -D "/data/postgres/17/pg_data" -o "" -m smart stop  "/data/postgres/17/pg_data/pg_upgrade_output.d/20240708T085906.955/log/pg_upgrade_server.log"
```

The log is generated under the $PGDATA of version 17.

Then run pg_upgrade for real:

```
[postgres ~]$ pg_upgrade -b /usr/pgsql-15/bin/ -B /usr/pgsql-17/bin/ -d /data/postgres/15/pg_data/ -D /data/postgres/17/pg_data/ -p 5434 -P 5437
```

## Taking a backup

### Prerequisites

Before taking a first backup with the native **pg_basebackup** tool, it is essential to
meet a few important prerequisites.

- The PostgreSQL instance must have been created with data checksums enabled. If it was not, use the **pg_checksums** tool with the **-e** option.

- If you take a full backup and then immediately try an incremental one, you will most likely hit this error:

```
pg_basebackup: error: could not initiate base backup: ERROR: incremental backups cannot be taken unless WAL summarization is enabled
```

Indeed, to know which blocks were modified, PostgreSQL needs to track in the WALs every modification made to database objects. Oracle DBAs will certainly recognise the "block change tracking" feature of the Enterprise Edition: this is the same idea, i.e. recording the changes made to data blocks. The parameter that controls it is **summarize_wal**.

To enable it, two parameters can be changed, either with an ALTER SYSTEM directly in psql or in the postgresql.conf file:

```
[postgres backup]$ vi $PGDATA/postgresql.conf
...

# - WAL Summarization -

#summarize_wal = off # run WAL summarizer process?
#wal_summary_keep_time = '10d' # when to remove old summary files, 0 = never
```

The first parameter turns the feature on. The second defines how long to keep the information about the blocks modified between a FULL backup and an incremental one.

We therefore set **summarize_wal** to **on** and leave **wal_summary_keep_time** at 10 days.

Beware: enable these two parameters before your first FULL backup. If you do it afterwards, you are likely to hit the following error:

```
pg_basebackup: error: could not initiate base backup: ERROR: WAL summaries are required on timeline 1 from 1/AA000028 to 1/AC000060, but the summaries for that timeline and LSN range are incomplete
DETAIL: The first unsummarized LSN in this range is 1/AA000028.
```

The LSN recorded by the first FULL backup is not covered by the summaries, so the incremental backup cannot build on it.

Restart the instance once the changes are made:

```
[postgres ~]$ pg_ctl -D /data/postgres/17/pg_data/ restart
```

### Taking a FULL backup

Here is the new option available in the **pg_basebackup** tool:

```
[postgres -]$ pg_basebackup --help
pg_basebackup takes a base backup of a running PostgreSQL server.

Usage:
  pg_basebackup [OPTION]...

Options controlling the output:
  -D, --pgdata=DIRECTORY receive base backup into directory
  -F, --format=p|t       output format (plain (default), tar)
  -i, --incremental=OLDMANIFEST
                         take incremental backup
  -r, --max-rate=RATE    maximum transfer rate to transfer data directory
                         (in kB/s, or use suffix "k" or "M")

....
```
Since [version 13](https://blog.capdata.fr/index.php/postgresql-13-les-nouveautes-interessantes/) of PostgreSQL, every backup comes with a file named "backup_manifest". It is a JSON file that lists every backed-up database object together with its location, size, modification date and checksum. It is essential for verifying the integrity of a backup with **pg_verifybackup**.

We can now take a first FULL backup of our PG17 instance:

```
[postgres -]$ pg_basebackup -D /data/postgres/backup/pg_basebackup/PG17 -F p -l "Full Backup PG17" -P -v
pg_basebackup: initiating base backup, waiting for checkpoint to complete
pg_basebackup: checkpoint completed
pg_basebackup: write-ahead log start point: 1/AD000028 on timeline 1
pg_basebackup: starting background WAL receiver
pg_basebackup: created temporary replication slot "pg_basebackup_8048"
3097788/3097788 kB (100%), 1/1 tablespace
pg_basebackup: write-ahead log end point: 1/AD000158
pg_basebackup: waiting for background process to finish streaming ...
pg_basebackup: syncing data to disk ...
pg_basebackup: renaming backup_manifest.tmp to backup_manifest
pg_basebackup: base backup completed
```

Then we run a few transactions: creating a test table and inserting some rows into it:

```
(postgres@[local]:5437) [manu] $ > create table backup (nom varchar(20), type varchar(20), date_backup date);
CREATE TABLE
Time: 3.344 ms

(postgres@[local]:5437) [manu] $ > insert into backup values ('sauvegarde','FULL','2024-07-08 12:00:00');
INSERT 0 1
Time: 3.612 ms
(postgres@[local]:5437) [manu] $ > insert into backup values ('sauvegarde','incremental','2024-07-08 13:00:00');
INSERT 0 1
Time: 1.461 ms

(postgres@[local]:5437) [manu] $ > select * from backup;
nom        | type        | date_backup
-----------+-------------+-------------
sauvegarde | FULL        | 2024-07-08
sauvegarde | incremental | 2024-07-08
(2 rows)
```

Locate the "backup_manifest" file of the FULL backup in the directory **/data/postgres/backup/pg_basebackup/PG17**:

```
[postgres PG17]$ ls -lrt backup*
-rw-------. 1 postgres postgres    218 Jul 8 09:19 backup_label
-rw-------. 1 postgres postgres 433295 Jul 8 09:20 backup_manifest
```

### Taking an incremental backup

From there, launch an incremental backup.
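The backup_manifest we just located is exactly what the incremental backup will consume. To peek inside it first, here is a rough sketch (not from the original article) that reports the manifest version and the number of files it covers; it assumes the documented manifest layout, where one "Path" key per line corresponds to one backed-up file, so plain grep is enough and jq is not needed:

```shell
# Summarize a backup_manifest: print its format version and how many files
# it lists. Assumes the JSON layout pg_basebackup writes: a top-level
# "PostgreSQL-Backup-Manifest-Version" key and one "Path" entry per file line.
manifest_summary() {
    manifest="$1"
    if [ ! -f "$manifest" ]; then
        echo "no manifest at $manifest" >&2
        return 1
    fi
    version=$(grep -o '"PostgreSQL-Backup-Manifest-Version": *[0-9]*' "$manifest" \
              | grep -o '[0-9]*$')
    files=$(grep -c '"Path"' "$manifest")
    echo "manifest version: ${version:-unknown}, files listed: $files"
}

# Path from this article's setup; tolerate absence on other hosts
manifest_summary /data/postgres/backup/pg_basebackup/PG17/backup_manifest || true
```

For anything beyond a quick count, `pg_verifybackup` is the proper consumer of this file.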
We use the **-i** option to tell **pg_basebackup** where the "backup_manifest" of the last FULL backup is located:

```
[postgres - ]$ pg_basebackup -D /data/postgres/backup/pg_basebackup/PG17_incr -l "Incremental Backup PG17" -P -v -i /data/postgres/backup/pg_basebackup/PG17/backup_manifest
pg_basebackup: initiating base backup, waiting for checkpoint to complete
pg_basebackup: checkpoint completed
pg_basebackup: write-ahead log start point: 1/AF000028 on timeline 1
pg_basebackup: starting background WAL receiver
pg_basebackup: created temporary replication slot "pg_basebackup_8139"
12485/3097787 kB (100%), 1/1 tablespace
pg_basebackup: write-ahead log end point: 1/AF000120
pg_basebackup: waiting for background process to finish streaming ...
pg_basebackup: syncing data to disk ...
pg_basebackup: renaming backup_manifest.tmp to backup_manifest
pg_basebackup: base backup completed
```

If we compare the two backup directories **/data/postgres/backup/pg_basebackup/PG17** and **/data/postgres/backup/pg_basebackup/PG17_incr**, the sizes are clearly different:

```
[postgres - ]$ du -h /data/postgres/backup/pg_basebackup/PG17
......
3.0G /data/postgres/backup/pg_basebackup/PG17

[postgres - ]$ du -h /data/postgres/backup/pg_basebackup/PG17_incr
......
35M /data/postgres/backup/pg_basebackup/PG17_incr
```

3 GB for the FULL backup of the instance versus 35 MB for the incremental: the space taken by the objects of each database is much smaller in the incremental backup.

We keep inserting data:

```
[postgres - ]$ psql -d manu

(postgres@[local]:5437) [manu] $ > select * from backup;
nom        | type        | date_backup
-----------+-------------+-------------
sauvegarde | FULL        | 2024-07-08
sauvegarde | incremental | 2024-07-08
(2 rows)

Time: 0.614 ms
(postgres@[local]:5437) [manu] $ > insert into backup values ('sauvegarde','incremental 2','2024-07-08 14:00:00');
INSERT 0 1
Time: 1.436 ms
(postgres@[local]:5437) [manu] $ > select * from backup;
nom        | type          | date_backup
-----------+---------------+-------------
sauvegarde | FULL          | 2024-07-08
sauvegarde | incremental   | 2024-07-08
sauvegarde | incremental 2 | 2024-07-08
(3 rows)
```

Then we launch a second incremental backup:

```
[postgres - ]$ pg_basebackup -D /data/postgres/backup/pg_basebackup/PG17_incr_2 -l "Incremental 2 Backup PG17" -P -v -i /data/postgres/backup/pg_basebackup/PG17_incr/backup_manifest
pg_basebackup: initiating base backup, waiting for checkpoint to complete
pg_basebackup: checkpoint completed
pg_basebackup: write-ahead log start point: 1/B1000028 on timeline 1
pg_basebackup: starting background WAL receiver
pg_basebackup: created temporary replication slot "pg_basebackup_8313"
12260/3097787 kB (100%), 1/1 tablespace
pg_basebackup: write-ahead log end point: 1/B1000120
pg_basebackup: waiting for background process to finish streaming ...
pg_basebackup: syncing data to disk ...
pg_basebackup: renaming backup_manifest.tmp to backup_manifest
pg_basebackup: base backup completed
```

Note that this time -i points to the "backup_manifest" of the
latest incremental backup, the one in the **/data/postgres/backup/pg_basebackup/PG17_incr** directory.

Looking at the size of this new backup:

```
[postgres pg_basebackup]$ du -h PG17_incr_2
.......
35M PG17_incr_2
```

Again 35 MB, but given how few changes were made, the size is not very meaningful here.

The key point is that, depending on which "backup_manifest" file you pass to **pg_basebackup**, you can take either:

- an incremental backup, capturing the changes made since the last incremental backup, or
- a differential backup, capturing all changes made since the last FULL backup, if you keep pointing at the "backup_manifest" of your FULL backup.

It is therefore this JSON "backup_manifest" file that plays the central role in building your backup strategy over time.

## And what about restoring?

To restore these backup sets we use a new tool, **pg_combinebackup**. It merges the various backups into one single directory, which is then restored. In our example we took 1 FULL backup followed by 2 incrementals, so we will combine these 3 backup sets to recover the data.
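Before moving on to restoration: the incremental-versus-differential choice described above boils down to which manifest is handed to `-i`. A minimal sketch of that decision (directory names below are this article's, except the hypothetical `PG17_incr_3` and `PG17_diff_1` targets; the function only prints the command it would run, so it can be reviewed before executing):

```shell
# Print the pg_basebackup command for the next backup in a chain.
#   mode=incr -> chain on the most recent backup's manifest (incremental)
#   mode=diff -> always chain on the FULL backup's manifest (differential)
next_backup_cmd() {
    mode="$1"; full_dir="$2"; prev_dir="$3"; target_dir="$4"
    case "$mode" in
        incr) manifest="$prev_dir/backup_manifest" ;;
        diff) manifest="$full_dir/backup_manifest" ;;
        *) echo "unknown mode: $mode" >&2; return 1 ;;
    esac
    echo "pg_basebackup -D $target_dir -P -v -i $manifest"
}

# Example with this article's layout: a third backup, chained both ways
# (PG17_incr_3 and PG17_diff_1 are hypothetical target directories).
next_backup_cmd incr /data/postgres/backup/pg_basebackup/PG17 \
    /data/postgres/backup/pg_basebackup/PG17_incr_2 \
    /data/postgres/backup/pg_basebackup/PG17_incr_3
next_backup_cmd diff /data/postgres/backup/pg_basebackup/PG17 \
    /data/postgres/backup/pg_basebackup/PG17_incr_2 \
    /data/postgres/backup/pg_basebackup/PG17_diff_1
```

The trade-off is the usual one: differential chains are larger but need only two backup sets at restore time, while incremental chains stay small but every link must be intact.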
Note that a "--dry-run" option exists to test the command first.

Run the command, passing the backup directories in chronological order:

```
[postgres - ]$ pg_combinebackup -n -o /data/postgres/backup/pg_basebackup/PG17_ALL /data/postgres/backup/pg_basebackup/PG17 /data/postgres/backup/pg_basebackup/PG17_incr /data/postgres/backup/pg_basebackup/PG17_incr_2
```

If no error is reported, run it again without the dry-run option:

```
[postgres - ]$ pg_combinebackup -o /data/postgres/backup/pg_basebackup/PG17_ALL /data/postgres/backup/pg_basebackup/PG17 /data/postgres/backup/pg_basebackup/PG17_incr /data/postgres/backup/pg_basebackup/PG17_incr_2
```

The resulting directory **/data/postgres/backup/pg_basebackup/PG17_ALL** should be only very slightly larger than the FULL backup directory:

```
[postgres - ]$ du -h PG17_ALL
....
3.0G PG17_ALL
```

Last step: restoring the data.

Stop the PG17 instance:

```
[postgres - ]$ pg_ctl -D /data/postgres/17/pg_data/ stop
waiting for server to shut down.... done
server stopped
```

Remove the data in $PGDATA:

```
[postgres - ]$ rm -rf /data/postgres/17/pg_data/*
```

Then restore the combined data set with a simple copy:

```
[postgres - ]$ cp -r /data/postgres/backup/pg_basebackup/PG17_ALL/* /data/postgres/17/pg_data/
```

Finally, restart the instance:

```
[postgres - ]$ pg_ctl -D /data/postgres/17/pg_data/ start
waiting for server to start....2024-07-08 10:51:45.671 UTC [8909] LOG: redirecting log output to logging collector process
2024-07-08 10:51:45.671 UTC [8909] HINT: Future log output will appear in directory "log".
done
server started
```

And check that we get back all the rows of our "backup" table:

```
[postgres@ip-172-44-2-96 pg_basebackup]$ psql -d manu
(postgres@[local]:5437) [manu] primaire $ > select * from backup;
nom        | type          | date_backup
-----------+---------------+-------------
sauvegarde | FULL          | 2024-07-08
sauvegarde | incremental   | 2024-07-08
sauvegarde | incremental 2 | 2024-07-08
```

### Remarks

- Always verify the backups at every step with **pg_verifybackup**: nothing guarantees that, by the time **pg_combinebackup** is called, the FULL and/or incremental backup sets have not been corrupted.

- Make sure data checksums are enabled, and do not change that setting between
backup sets: the "backup_manifest" relies on this setting to validate the checksum of each file.

- The TAR format of **pg_basebackup** can technically be used for full and incremental backups, but the combination is not supported as-is: you have to untar the "base.tar.gz" files yourself, and at restore time **pg_combinebackup** may then run into corruption:

```
[postgres - ]$ pg_combinebackup -o /data/postgres/backup/pg_basebackup/PG17_all_tar /data/postgres/backup/pg_basebackup/PG17_TAR /data/postgres/backup/pg_basebackup/PG17_incr_TAR
pg_combinebackup: error: could not write to file "/data/postgres/backup/pg_basebackup/PG17_all_tar/base/25284/25332", offset 122470400: wrote 380928 of 409600
pg_combinebackup: removing output directory "/data/postgres/backup/pg_basebackup/PG17_all_tar"
```

  In our test, the compression step apparently introduced a corruption that made the merge operation impossible.

- PITR restores are of course possible.
  Do not forget to create the **recovery.signal** file in $PGDATA and to set the relevant parameters in postgresql.conf: one of the target selectors,
  - recovery_target_name
  - recovery_target_time
  - recovery_target_xid
  - recovery_target_lsn

  plus the behaviour parameters:
  - recovery_target_inclusive = off or on
  - recovery_target_timeline = 'latest'
  - recovery_target_action = 'pause'

🙂
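For reference, a hedged sketch of what such a PITR setup could look like in postgresql.conf. Only one recovery_target_* selector may be set at a time; the values below are illustrative, not from the article, and the WAL archive path in restore_command is hypothetical:

```
# restore to a point in time, then pause for inspection
restore_command = 'cp /path/to/wal_archive/%f "%p"'   # hypothetical archive path
recovery_target_time = '2024-07-08 14:30:00'
recovery_target_inclusive = on
recovery_target_timeline = 'latest'
recovery_target_action = 'pause'
```

together with an empty recovery.signal file created in $PGDATA before starting the instance.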
nolightbox\" data-provider=\"linkedin\" target=\"_blank\" rel=\"nofollow\" title=\"Share on Linkedin\" href=\"https:\/\/www.linkedin.com\/shareArticle?mini=true&#038;url=https%3A%2F%2Fblog.capdata.fr%2Findex.php%2Fwp-json%2Fwp%2Fv2%2Fposts%2F10584&#038;title=PostgreSQL%2017%20%3A%20des%20sauvegardes%20incr%C3%A9mentales%20avec%20pg_basebackup\" style=\"font-size: 0px;width:24px;height:24px;margin:0;margin-bottom:5px;margin-right:5px\"><img loading=\"lazy\" decoding=\"async\" alt=\"linkedin\" title=\"Share on Linkedin\" class=\"synved-share-image synved-social-image synved-social-image-share\" width=\"24\" height=\"24\" style=\"display: inline;width:24px;height:24px;margin: 0;padding: 0;border: none;box-shadow: none\" src=\"https:\/\/blog.capdata.fr\/wp-content\/plugins\/social-media-feather\/synved-social\/image\/social\/regular\/48x48\/linkedin.png\" \/><\/a><a class=\"synved-social-button synved-social-button-share synved-social-size-24 synved-social-resolution-single synved-social-provider-mail nolightbox\" data-provider=\"mail\" rel=\"nofollow\" title=\"Share by email\" href=\"mailto:?subject=PostgreSQL%2017%20%3A%20des%20sauvegardes%20incr%C3%A9mentales%20avec%20pg_basebackup&#038;body=Article%20sur%20le%20blog%20de%20la%20Capdata%20Tech%20Team%20%3A%20:%20https%3A%2F%2Fblog.capdata.fr%2Findex.php%2Fwp-json%2Fwp%2Fv2%2Fposts%2F10584\" style=\"font-size: 0px;width:24px;height:24px;margin:0;margin-bottom:5px\"><img loading=\"lazy\" decoding=\"async\" alt=\"mail\" title=\"Share by email\" class=\"synved-share-image synved-social-image synved-social-image-share\" width=\"24\" height=\"24\" style=\"display: inline;width:24px;height:24px;margin: 0;padding: 0;border: none;box-shadow: none\" src=\"https:\/\/blog.capdata.fr\/wp-content\/plugins\/social-media-feather\/synved-social\/image\/social\/regular\/48x48\/mail.png\" \/><\/a>","protected":false},"excerpt":{"rendered":"<p>&nbsp; Bonjour Les 11 et 12 juin derniers, nous \u00e9tions aux journ\u00e9es PGDAY \u00e0 
Lille pour d\u00e9couvrir les nouveaut\u00e9s autour de PostgreSQL. Cette conf\u00e9rence regroupe diff\u00e9rents professionnels, de la communaut\u00e9 francophone, qui agissent en contribuant sur des sujets techniques mais&hellip; <a href=\"https:\/\/blog.capdata.fr\/index.php\/postgresql-17-sauvegardes-incrementales\/\" class=\"more-link\">Continuer la lecture <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":32,"featured_media":10593,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1,266],"tags":[482,288],"class_list":["post-10584","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-non-classe","category-postgresql","tag-incremental","tag-sauvegardes"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v20.8 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>PostgreSQL 17 : des sauvegardes incr\u00e9mentales avec pg_basebackup - Capdata TECH BLOG<\/title>\n<meta name=\"description\" content=\"Effectuer des sauvegarde incr\u00e9mentales avec pg_basebackup.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/blog.capdata.fr\/index.php\/postgresql-17-sauvegardes-incrementales\/\" \/>\n<meta property=\"og:locale\" content=\"fr_FR\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"PostgreSQL 17 : des sauvegardes incr\u00e9mentales avec pg_basebackup - Capdata TECH BLOG\" \/>\n<meta property=\"og:description\" content=\"Effectuer des sauvegarde incr\u00e9mentales avec pg_basebackup.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/blog.capdata.fr\/index.php\/postgresql-17-sauvegardes-incrementales\/\" \/>\n<meta property=\"og:site_name\" content=\"Capdata TECH BLOG\" \/>\n<meta property=\"article:published_time\" 
content=\"2024-07-16T11:24:05+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-07-17T10:29:01+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2024\/07\/backup.png\" \/>\n\t<meta property=\"og:image:width\" content=\"300\" \/>\n\t<meta property=\"og:image:height\" content=\"200\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Emmanuel RAMI\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"\u00c9crit par\" \/>\n\t<meta name=\"twitter:data1\" content=\"Emmanuel RAMI\" \/>\n\t<meta name=\"twitter:label2\" content=\"Dur\u00e9e de lecture estim\u00e9e\" \/>\n\t<meta name=\"twitter:data2\" content=\"13 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/blog.capdata.fr\/index.php\/postgresql-17-sauvegardes-incrementales\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/blog.capdata.fr\/index.php\/postgresql-17-sauvegardes-incrementales\/\"},\"author\":{\"name\":\"Emmanuel RAMI\",\"@id\":\"https:\/\/blog.capdata.fr\/#\/schema\/person\/797b9b6698fa35f7ce3e9a70a8b102ae\"},\"headline\":\"PostgreSQL 17 : des sauvegardes incr\u00e9mentales avec 
pg_basebackup\",\"datePublished\":\"2024-07-16T11:24:05+00:00\",\"dateModified\":\"2024-07-17T10:29:01+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/blog.capdata.fr\/index.php\/postgresql-17-sauvegardes-incrementales\/\"},\"wordCount\":3035,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/blog.capdata.fr\/#organization\"},\"keywords\":[\"incr\u00e9mental\",\"Sauvegardes\"],\"articleSection\":{\"1\":\"PostgreSQL\"},\"inLanguage\":\"fr-FR\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/blog.capdata.fr\/index.php\/postgresql-17-sauvegardes-incrementales\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/blog.capdata.fr\/index.php\/postgresql-17-sauvegardes-incrementales\/\",\"url\":\"https:\/\/blog.capdata.fr\/index.php\/postgresql-17-sauvegardes-incrementales\/\",\"name\":\"PostgreSQL 17 : des sauvegardes incr\u00e9mentales avec pg_basebackup - Capdata TECH BLOG\",\"isPartOf\":{\"@id\":\"https:\/\/blog.capdata.fr\/#website\"},\"datePublished\":\"2024-07-16T11:24:05+00:00\",\"dateModified\":\"2024-07-17T10:29:01+00:00\",\"description\":\"Effectuer des sauvegarde incr\u00e9mentales avec pg_basebackup.\",\"breadcrumb\":{\"@id\":\"https:\/\/blog.capdata.fr\/index.php\/postgresql-17-sauvegardes-incrementales\/#breadcrumb\"},\"inLanguage\":\"fr-FR\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/blog.capdata.fr\/index.php\/postgresql-17-sauvegardes-incrementales\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/blog.capdata.fr\/index.php\/postgresql-17-sauvegardes-incrementales\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Accueil\",\"item\":\"https:\/\/blog.capdata.fr\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"PostgreSQL 17 : des sauvegardes incr\u00e9mentales avec pg_basebackup\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/blog.capdata.fr\/#website\",\"url\":\"https:\/\/blog.capdata.fr\/\",\"name\":\"Capdata TECH 
BLOG\",\"description\":\"Le blog technique sur les bases de donn\u00e9es de CAP DATA Consulting\",\"publisher\":{\"@id\":\"https:\/\/blog.capdata.fr\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/blog.capdata.fr\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"fr-FR\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/blog.capdata.fr\/#organization\",\"name\":\"Capdata TECH BLOG\",\"url\":\"https:\/\/blog.capdata.fr\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\/\/blog.capdata.fr\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/01\/logo_capdata.webp\",\"contentUrl\":\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/01\/logo_capdata.webp\",\"width\":800,\"height\":254,\"caption\":\"Capdata TECH BLOG\"},\"image\":{\"@id\":\"https:\/\/blog.capdata.fr\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.linkedin.com\/company\/cap-data-consulting\/mycompany\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/blog.capdata.fr\/#\/schema\/person\/797b9b6698fa35f7ce3e9a70a8b102ae\",\"name\":\"Emmanuel RAMI\",\"sameAs\":[\"https:\/\/blog.capdata.fr\"],\"url\":\"https:\/\/blog.capdata.fr\/index.php\/author\/erami\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"PostgreSQL 17 : des sauvegardes incr\u00e9mentales avec pg_basebackup - Capdata TECH BLOG","description":"Effectuer des sauvegarde incr\u00e9mentales avec pg_basebackup.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/blog.capdata.fr\/index.php\/postgresql-17-sauvegardes-incrementales\/","og_locale":"fr_FR","og_type":"article","og_title":"PostgreSQL 17 : des sauvegardes incr\u00e9mentales avec pg_basebackup - Capdata TECH BLOG","og_description":"Effectuer des sauvegarde incr\u00e9mentales avec pg_basebackup.","og_url":"https:\/\/blog.capdata.fr\/index.php\/postgresql-17-sauvegardes-incrementales\/","og_site_name":"Capdata TECH BLOG","article_published_time":"2024-07-16T11:24:05+00:00","article_modified_time":"2024-07-17T10:29:01+00:00","og_image":[{"width":300,"height":200,"url":"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2024\/07\/backup.png","type":"image\/png"}],"author":"Emmanuel RAMI","twitter_card":"summary_large_image","twitter_misc":{"\u00c9crit par":"Emmanuel RAMI","Dur\u00e9e de lecture estim\u00e9e":"13 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/blog.capdata.fr\/index.php\/postgresql-17-sauvegardes-incrementales\/#article","isPartOf":{"@id":"https:\/\/blog.capdata.fr\/index.php\/postgresql-17-sauvegardes-incrementales\/"},"author":{"name":"Emmanuel RAMI","@id":"https:\/\/blog.capdata.fr\/#\/schema\/person\/797b9b6698fa35f7ce3e9a70a8b102ae"},"headline":"PostgreSQL 17 : des sauvegardes incr\u00e9mentales avec 
pg_basebackup","datePublished":"2024-07-16T11:24:05+00:00","dateModified":"2024-07-17T10:29:01+00:00","mainEntityOfPage":{"@id":"https:\/\/blog.capdata.fr\/index.php\/postgresql-17-sauvegardes-incrementales\/"},"wordCount":3035,"commentCount":0,"publisher":{"@id":"https:\/\/blog.capdata.fr\/#organization"},"keywords":["incr\u00e9mental","Sauvegardes"],"articleSection":{"1":"PostgreSQL"},"inLanguage":"fr-FR","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/blog.capdata.fr\/index.php\/postgresql-17-sauvegardes-incrementales\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/blog.capdata.fr\/index.php\/postgresql-17-sauvegardes-incrementales\/","url":"https:\/\/blog.capdata.fr\/index.php\/postgresql-17-sauvegardes-incrementales\/","name":"PostgreSQL 17 : des sauvegardes incr\u00e9mentales avec pg_basebackup - Capdata TECH BLOG","isPartOf":{"@id":"https:\/\/blog.capdata.fr\/#website"},"datePublished":"2024-07-16T11:24:05+00:00","dateModified":"2024-07-17T10:29:01+00:00","description":"Effectuer des sauvegarde incr\u00e9mentales avec pg_basebackup.","breadcrumb":{"@id":"https:\/\/blog.capdata.fr\/index.php\/postgresql-17-sauvegardes-incrementales\/#breadcrumb"},"inLanguage":"fr-FR","potentialAction":[{"@type":"ReadAction","target":["https:\/\/blog.capdata.fr\/index.php\/postgresql-17-sauvegardes-incrementales\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/blog.capdata.fr\/index.php\/postgresql-17-sauvegardes-incrementales\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Accueil","item":"https:\/\/blog.capdata.fr\/"},{"@type":"ListItem","position":2,"name":"PostgreSQL 17 : des sauvegardes incr\u00e9mentales avec pg_basebackup"}]},{"@type":"WebSite","@id":"https:\/\/blog.capdata.fr\/#website","url":"https:\/\/blog.capdata.fr\/","name":"Capdata TECH BLOG","description":"Le blog technique sur les bases de donn\u00e9es de CAP DATA 
Consulting","publisher":{"@id":"https:\/\/blog.capdata.fr\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/blog.capdata.fr\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"fr-FR"},{"@type":"Organization","@id":"https:\/\/blog.capdata.fr\/#organization","name":"Capdata TECH BLOG","url":"https:\/\/blog.capdata.fr\/","logo":{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/blog.capdata.fr\/#\/schema\/logo\/image\/","url":"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/01\/logo_capdata.webp","contentUrl":"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/01\/logo_capdata.webp","width":800,"height":254,"caption":"Capdata TECH BLOG"},"image":{"@id":"https:\/\/blog.capdata.fr\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.linkedin.com\/company\/cap-data-consulting\/mycompany\/"]},{"@type":"Person","@id":"https:\/\/blog.capdata.fr\/#\/schema\/person\/797b9b6698fa35f7ce3e9a70a8b102ae","name":"Emmanuel 
RAMI","sameAs":["https:\/\/blog.capdata.fr"],"url":"https:\/\/blog.capdata.fr\/index.php\/author\/erami\/"}]}},"_links":{"self":[{"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/posts\/10584","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/users\/32"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/comments?post=10584"}],"version-history":[{"count":27,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/posts\/10584\/revisions"}],"predecessor-version":[{"id":10616,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/posts\/10584\/revisions\/10616"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/media\/10593"}],"wp:attachment":[{"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/media?parent=10584"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/categories?post=10584"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/tags?post=10584"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}