# PostgreSQL: optimize your VACUUM and ANALYZE operations!

*Published 2025-02-26 on the Capdata Tech Team blog, by Emmanuel RAMI.*

Hello,

To start off 2025, here is a short PostgreSQL article showing how to optimize the maintenance operations VACUUM and ANALYZE.

These two operations are essential to keep an instance performing well and to guarantee that the planner builds optimized execution plans.

On large tables, VACUUM and/or ANALYZE operations can take a long time and generate a great deal of WAL writes.
This is why, since **PostgreSQL 16**, it is possible to change the behavior of these operations by assigning them a buffer size: the "[BUFFER_USAGE_LIMIT](https://www.postgresql.org/docs/current/runtime-config-resource.html#GUC-VACUUM-BUFFER-USAGE-LIMIT)" option.

## How it works

This mechanism relies on the "ring buffer" strategy implemented in PostgreSQL.

Careful not to confuse it, of course, with SQL Server's "ring buffers"!

As a reminder, PostgreSQL uses this "ring buffer" strategy to dedicate a memory area to heavy operations such as a sequential scan on a large table, a CREATE TABLE AS SELECT, a COPY... but also a VACUUM!

This area is used to avoid brutally pushing out of the "**shared buffers**" the pages that are already loaded in memory.
Without it, any operation running concurrently with our bulk processing would be heavily penalized, since it would no longer have room to load its own pages into memory.

Up to PostgreSQL 16, this memory area was fixed at **256kB**. During a sequential scan, each page (**8kB** by default) is staged through this area whenever the total number of pages to process for the table exceeds 1/4 of "**shared_buffers**".

The same goes for a VACUUM or ANALYZE operation, which also uses this ring buffer.

### What's new in PostgreSQL 16 and PostgreSQL 17

Since PostgreSQL 16, the size of this buffer can be configured per command. For example, when running a VACUUM, you can choose a value for "**BUFFER_USAGE_LIMIT**".

Since PostgreSQL 17, the default value of "**BUFFER_USAGE_LIMIT**" is **2MB**.

You can set any value from **128kB** up to **16GB**. Be aware, however, that the effective value cannot exceed **1/8** of "**shared_buffers**". Do the math: on a server with **32GB of RAM** and the usual shared_buffers sizing of 25% of RAM (8GB), your ring buffer is capped at **1GB** at most.

##### Use case: VACUUM

On a PostgreSQL 13 instance, we run a plain VACUUM on a 2.5GB table.
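For reference, a comparable test table can be built with pgbench. This is a sketch under the assumption that a scale factor of 200 was used, which yields the 20000000-row, roughly 2.5GB `pgbench_accounts` table seen in the transcripts below (the database name `pgbenchmark` matches the psql prompts):

```shell
# Initialize a pgbench sample database.
# -s 200 creates pgbench_accounts with 200 * 100,000 = 20,000,000 rows (~2.5GB).
createdb pgbenchmark
pgbench -i -s 200 pgbenchmark
```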
We use a sample database created with "pgbench".

We used the "**DISABLE_PAGE_SKIPPING**" option so that, in this first run, every block of the table is processed, without relying on the visibility map.

```sql
(postgres@[local]:5433) [pgbenchmark] primaire $  vacuum (verbose,DISABLE_PAGE_SKIPPING) public.pgbench_accounts;
INFO: 00000: aggressively vacuuming "public.pgbench_accounts"
LOCATION: lazy_scan_heap, vacuumlazy.c:797
INFO: 00000: "pgbench_accounts": found 0 removable, 20000000 nonremovable row versions in 327869 out of 327869 pages
DETAIL: 0 dead row versions cannot be removed yet, oldest xmin: 196215
There were 0 unused item identifiers.
Skipped 0 pages due to buffer pins, 0 frozen pages.
0 pages are entirely empty.
CPU: user: 1.34 s, system: 0.67 s, elapsed: 18.90 s.
LOCATION: lazy_scan_heap, vacuumlazy.c:1759
VACUUM
Time: 18920.292 ms (00:18.920)
```

The elapsed time for this operation is a little over 18 seconds to scan the 327869 blocks of our table, i.e. 2.5GB.

We run the same operation on the same table, but on a **PostgreSQL 17** engine, setting **BUFFER_USAGE_LIMIT** to **8MB**.

```sql
(postgres@[local]:5437) [pgbenchmark] primaire $  vacuum (verbose,DISABLE_PAGE_SKIPPING,BUFFER_USAGE_LIMIT '8MB') public.pgbench_accounts;
INFO: 00000: aggressively vacuuming "pgbenchmark.public.pgbench_accounts"
LOCATION: heap_vacuum_rel, vacuumlazy.c:475
INFO: 00000: finished vacuuming "pgbenchmark.public.pgbench_accounts": index scans: 0
pages: 0 removed, 327869 remain, 327869 scanned (100.00% of total)
tuples: 0 removed, 20000000 remain, 0 are dead but not yet removable
removable cutoff: 208163, which was 0 XIDs old when operation ended
new relfrozenxid: 208163, which is 5 XIDs ahead of previous value
frozen: 0 pages from table (0.00% of total) had 0 tuples frozen
index scan not needed: 0 pages from table (0.00% of total) had 0 dead item identifiers removed
avg read rate: 134.208 MB/s, avg write rate: 0.034 MB/s
buffer usage: 330350 hits, 325508 misses, 83 dirtied
WAL usage: 84 records, 83 full page images, 684232 bytes
system usage: CPU: user: 1.41 s, system: 0.64 s, elapsed: 18.94 s
LOCATION: heap_vacuum_rel, vacuumlazy.c:763
VACUUM
Time: 18955.006 ms (00:18.955)
```

The execution time is practically identical: about 18 seconds. On this cold run, reading the whole table from disk dominates (note the 134 MB/s average read rate), so the ring buffer size makes little difference.

The next step is to restart the instances and observe the execution time of each successive run.
We therefore execute the same VACUUM statements, but without the "**DISABLE_PAGE_SKIPPING**" option.

On PostgreSQL 13, we see that the first execution, right after the restart, takes 32 milliseconds.
And on each subsequent execution, we never drop below 15 milliseconds...

```sql
(postgres@[local]:5433) [pgbenchmark] primaire $  vacuum public.pgbench_accounts;
VACUUM
Time: 32.149 ms

(postgres@[local]:5433) [pgbenchmark] primaire $  vacuum public.pgbench_accounts;
VACUUM
Time: 15.001 ms

(postgres@[local]:5433) [pgbenchmark] primaire $  vacuum public.pgbench_accounts;
VACUUM
Time: 15.295 ms
```

On PostgreSQL 17, we likewise flush the pages from the buffer cache before each execution, while varying the value of "**BUFFER_USAGE_LIMIT**".

```sql
(postgres@[local]:5437) [pgbenchmark] primaire $  vacuum (BUFFER_USAGE_LIMIT '128kB') public.pgbench_accounts;
VACUUM
Time: 18.098 ms

(postgres@[local]:5437) [pgbenchmark] primaire $  vacuum (BUFFER_USAGE_LIMIT '8MB') public.pgbench_accounts;
VACUUM
Time: 6.461 ms

(postgres@[local]:5437) [pgbenchmark] primaire $  vacuum (BUFFER_USAGE_LIMIT '16MB') public.pgbench_accounts;
VACUUM
Time: 4.333 ms
```

The takeaway is simple: the larger the "**BUFFER_USAGE_LIMIT**", the shorter the VACUUM execution time.

On a table of more than 100GB, the gain can therefore be quite significant.

##### Use case: ANALYZE

On the PostgreSQL 13 instance, we compute statistics on the same table.

```sql
(postgres@[local]:5433) [pgbenchmark] primaire $  vacuum (analyze,verbose) public.pgbench_accounts;
INFO: 00000: vacuuming "public.pgbench_accounts"
LOCATION: lazy_scan_heap, vacuumlazy.c:802
INFO: 00000: "pgbench_accounts": found 0 removable, 52 nonremovable row versions in 1 out of 327869 pages
DETAIL: 0 dead row versions cannot be removed yet, oldest xmin: 196278
There were 0 unused item identifiers.
Skipped 0 pages due to buffer pins, 0 frozen pages.
0 pages are entirely empty.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
LOCATION: lazy_scan_heap, vacuumlazy.c:1759
INFO: 00000: analyzing "public.pgbench_accounts"
LOCATION: do_analyze_rel, analyze.c:336
INFO: 00000: "pgbench_accounts": scanned 30000 of 327869 pages, containing 1830000 live rows and 0 dead rows; 30000 rows in sample, 20000009 estimated total rows
LOCATION: acquire_sample_rows, analyze.c:1190
VACUUM
Time: 29526.404 ms (00:29.526)
```

We are at around 29 seconds to sample 30000 of the 327869 pages that make up this table.
The sample size is derived from "**default_statistics_target**" (100 by default): ANALYZE reads 300 × default_statistics_target rows, i.e. 30000 rows by default.

On PostgreSQL 17, the results are as follows:

```sql
(postgres@[local]:5437) [pgbenchmark] primaire $  vacuum (analyze,verbose,BUFFER_USAGE_LIMIT '128kB') public.pgbench_accounts;
INFO: 00000: vacuuming "pgbenchmark.public.pgbench_accounts"
LOCATION: heap_vacuum_rel, vacuumlazy.c:480
INFO: 00000: finished vacuuming "pgbenchmark.public.pgbench_accounts": index scans: 0
pages: 0 removed, 327869 remain, 1 scanned (0.00% of total)
tuples: 0 removed, 20000000 remain, 0 are dead but not yet removable
removable cutoff: 208163, which was 0 XIDs old when operation ended
frozen: 0 pages from table (0.00% of total) had 0 tuples frozen
index scan not needed: 0 pages from table (0.00% of total) had 0 dead item identifiers removed
avg read rate: 46.211 MB/s, avg write rate: 0.000 MB/s
buffer usage: 37 hits, 100 misses, 0 dirtied
WAL usage: 0 records, 0 full page images, 0 bytes
system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.01 s
LOCATION: heap_vacuum_rel, vacuumlazy.c:763
INFO: 00000: analyzing "public.pgbench_accounts"
LOCATION: do_analyze_rel, analyze.c:321
INFO: 00000: "pgbench_accounts": scanned 30000 of 327869 pages, containing 1830000 live rows and 0 dead rows; 30000 rows in sample, 20000009 estimated total rows
LOCATION: acquire_sample_rows, analyze.c:1301
VACUUM
Time: 8151.201 ms (00:08.151)

(postgres@[local]:5437) [pgbenchmark] primaire $  vacuum (analyze,BUFFER_USAGE_LIMIT '8MB') public.pgbench_accounts;
VACUUM
Time: 7282.546 ms (00:07.283)
```

The gains from one "**BUFFER_USAGE_LIMIT**" value to the next are less impressive than for a plain VACUUM, but PostgreSQL 17 is still about 4 times faster here than PostgreSQL 13.

#### Remarks

Keep in mind that "**BUFFER_USAGE_LIMIT**" is capped at **1/8** of "**shared_buffers**". There is no point in asking for 1024MB if you only have 8GB of RAM: with shared_buffers at the usual 25% of RAM (2GB), the effective cap is 256MB.

Be careful with values that are too large: concurrent transactions performing sequential scans will be penalized by the VACUUM operations.
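As a quick sanity check of this cap on a live instance, the following queries may help (a sketch assuming PostgreSQL 16 or later; `vacuum_buffer_usage_limit` is the instance-wide setting behind the per-command BUFFER_USAGE_LIMIT option):

```sql
-- Instance-wide default ring buffer size for VACUUM and ANALYZE
SHOW vacuum_buffer_usage_limit;

-- 1/8 of shared_buffers: the hard cap on BUFFER_USAGE_LIMIT
SELECT pg_size_pretty(pg_size_bytes(current_setting('shared_buffers')) / 8)
       AS buffer_usage_limit_cap;
```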
Incidentally, it is possible to set "**BUFFER_USAGE_LIMIT**" to 0, which disables the ring buffer strategy altogether, but this is not advised while transactional activity is in progress.

To go further in optimizing a VACUUM operation, you can also set the "**INDEX_CLEANUP**" option to **off**. VACUUM will then skip processing the index entries that point to the dead rows of the table.
A "**REINDEX**" of the table's indexes will then be necessary after the VACUUM.

In addition, the "**SKIP_DATABASE_STATS**" option tells VACUUM not to compute the oldest unfrozen transaction ID across all tables of the database, i.e. not to update "datfrozenxid".

VACUUM operations on large tables will of course be faster, but watch out for the maintenance windows you choose!

Have a nice day.

Emmanuel Rami