{"id":8321,"date":"2020-10-30T12:51:42","date_gmt":"2020-10-30T11:51:42","guid":{"rendered":"https:\/\/blog.capdata.fr\/?p=8321"},"modified":"2020-10-30T12:51:35","modified_gmt":"2020-10-30T11:51:35","slug":"postgresql-13-les-nouveautes-interessantes","status":"publish","type":"post","link":"https:\/\/blog.capdata.fr\/index.php\/postgresql-13-les-nouveautes-interessantes\/","title":{"rendered":"PostgreSQL 13: overview"},"content":{"rendered":"<p>&nbsp;<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-8347\" src=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2020\/10\/pool_billiard_ball_13-512-300x300.png\" alt=\"\" width=\"192\" height=\"192\" \/><\/p>\n<p>&nbsp;<\/p>\n<p>The new official PostgreSQL release came out on September 24, 2020: make way for version 13.<\/p>\n<p>It is, of course, available for download from <a href=\"https:\/\/www.postgresql.org\/download\/\">PostgreSQL.org<\/a>, where you will find the sources and the latest packages.<br \/>\nAs with previous versions, Linux users can also update their repository &#8220;source&#8221; files to benefit from this new release.<\/p>\n<p>&nbsp;<\/p>\n<h2>What&#8217;s new<\/h2>\n<p>As with every release, PostgreSQL 13 brings its share of new features.<br \/>\nHere is a non-exhaustive list of improvements in the following areas:<\/p>\n<ul>\n<li>Performance\n<ul>\n<li>Extended statistics to improve execution plans, with the new &#8220;<a href=\"https:\/\/www.postgresql.org\/docs\/13\/catalog-pg-statistic-ext-data.html\">pg_statistic_ext_data<\/a>&#8221; view<\/li>\n<li>Incremental sort, to speed up sorting on joins, controlled via the <a href=\"https:\/\/www.postgresql.org\/docs\/13\/runtime-config-query.html#GUC-ENABLE-INCREMENTAL-SORT\">enable_incremental_sort<\/a> parameter<\/li>\n<li>B-tree index optimization through deduplication of duplicate values.<\/li>\n<li>Hash aggregation can now spill to disk for large sets, so it is no longer capped by the &#8220;work_mem&#8221; value<\/li>\n<li>FETCH FIRST WITH TIES, to return the top n results together with their ties (equivalent to a rank &lt;= 
n).<\/li>\n<\/ul>\n<\/li>\n<li>Maintenance\n<ul>\n<li><a href=\"https:\/\/www.postgresql.org\/docs\/13\/runtime-config-replication.html#GUC-WAL-KEEP-SIZE\">wal_keep_size<\/a> replaces wal_keep_segments; the value to set is therefore: number of WALs x 16 MB.<\/li>\n<li>Parallel VACUUM of a table&#8217;s associated indexes, to speed up processing (<a href=\"https:\/\/www.postgresql.org\/docs\/13\/sql-vacuum.html#PARALLEL\">PARALLEL<\/a> clause)<\/li>\n<li>Direct joins between table partitions, &#8220;<a href=\"https:\/\/www.postgresql.org\/docs\/13\/runtime-config-query.html#GUC-ENABLE-PARTITIONWISE-JOIN\">partitionwise joins<\/a>&#8221;<\/li>\n<li>Backup size estimation with &#8220;pg_basebackup&#8221; by default.<\/li>\n<li>The &#8220;<a href=\"https:\/\/www.postgresql.org\/docs\/13\/app-pgverifybackup.html\">pg_verifybackup<\/a>&#8221; tool, to check the integrity of a backup taken with &#8220;pg_basebackup&#8221; in &#8220;plain&#8221; format.<\/li>\n<li>&#8220;Trusted extensions&#8221;: a &#8216;superuser&#8217; can allow another user to install an extension that was approved beforehand.<\/li>\n<li>&#8220;pg_dump&#8221; can include the data of foreign tables with the &#8220;--include-foreign-data&#8221; option<\/li>\n<li>The &#8220;<a href=\"https:\/\/www.postgresql.org\/docs\/13\/runtime-config-resource.html#GUC-MAINTENANCE-IO-CONCURRENCY\">maintenance_io_concurrency<\/a>&#8221; parameter, to parallelize certain I\/O operations in maintenance tasks<\/li>\n<\/ul>\n<\/li>\n<li>Monitoring\n<ul>\n<li>Monitor the progress of a running backup with the <a href=\"https:\/\/www.postgresql.org\/docs\/13\/progress-reporting.html#BASEBACKUP-PROGRESS-REPORTING\">pg_stat_progress_basebackup<\/a> view<\/li>\n<li>Monitor the progress of a running ANALYZE with the <a href=\"https:\/\/www.postgresql.org\/docs\/13\/progress-reporting.html#ANALYZE-PROGRESS-REPORTING\">pg_stat_progress_analyze<\/a> view<\/li>\n<li>Addition of the &#8220;leader_pid&#8221; column in &#8220;pg_stat_activity&#8221; when parallelism is in use<\/li>\n<li>Finer-grained information on shared memory with the &#8220;<a href=\"https:\/\/www.postgresql.org\/docs\/13\/view-pg-shmem-allocations.html\">pg_shmem_allocations<\/a>&#8221; view<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<h2>A few examples of these improvements<\/h2>\n<p>&nbsp;<\/p>\n<p>Let&#8217;s take a closer look at some of this release&#8217;s advances and see whether they are genuinely useful!<\/p>\n<h3><span style=\"color: #008080;\">Partitioning generated by pgbench<\/span><\/h3>\n<p>With release 13, &#8220;pgbench&#8221; can create partitioned tables. The options are as follows:<\/p>\n<p>&nbsp;<\/p>\n<pre>[postgres@pg13]$ <span style=\"color: #008080;\">pgbench --help | grep -i partition<\/span>\r\n<span style=\"color: #993300;\">--partition-method<\/span>=(range|hash)\r\npartition pgbench_accounts with this method (default: range)\r\n<span style=\"color: #993300;\">--partitions<\/span>=NUM partition pgbench_accounts into NUM parts (default: 0)<\/pre>\n<p>&nbsp;<\/p>\n<p>For this article, let&#8217;s start with range partitioning and 10 partitions.<\/p>\n<p>&nbsp;<\/p>\n<pre>[postgres@pg13]$ <span style=\"color: #008080;\">pgbench -i -s 10 --partitions=10 --partition-method=range --foreign-keys -p 5433 manuelo<\/span>\r\ndropping old tables...\r\nNOTICE: table \"pgbench_accounts\" does not exist, skipping\r\nNOTICE: table \"pgbench_branches\" does not exist, skipping\r\nNOTICE: table \"pgbench_history\" does not exist, skipping\r\nNOTICE: table \"pgbench_tellers\" does not exist, skipping\r\ncreating tables...\r\ncreating 10 
partitions...\r\ngenerating data (client-side)...\r\n1000000 of 1000000 tuples (100%) done (elapsed 5.75 s, remaining 0.00 s)\r\nvacuuming...\r\ncreating primary keys...\r\ncreating foreign keys...\r\ndone in 9.51 s (drop tables 0.00 s, create tables 0.04 s, client-side generate 5.80 s, vacuum 0.49 s, primary keys 2.80 s, foreign keys 0.37 s).<\/pre>\n<p>If we list the objects created in our &#8216;manuelo&#8217; database:<\/p>\n<p>&nbsp;<\/p>\n<pre>manuelo=#<span style=\"color: #008080;\"> \\dt+<\/span>\r\nList of relations\r\nSchema | Name | Type | Owner | Persistence | Size | Description\r\n--------+---------------------+-------------------+----------+-------------+------------+-------------\r\npublic | pgbench_accounts | partitioned table | postgres | permanent | 0 bytes |\r\npublic | pgbench_accounts_1 | table | postgres | permanent | 13 MB |\r\npublic | pgbench_accounts_10 | table | postgres | permanent | 13 MB |\r\npublic | pgbench_accounts_2 | table | postgres | permanent | 13 MB |\r\npublic | pgbench_accounts_3 | table | postgres | permanent | 13 MB |\r\npublic | pgbench_accounts_4 | table | postgres | permanent | 13 MB |\r\npublic | pgbench_accounts_5 | table | postgres | permanent | 13 MB |\r\npublic | pgbench_accounts_6 | table | postgres | permanent | 13 MB |\r\npublic | pgbench_accounts_7 | table | postgres | permanent | 13 MB |\r\npublic | pgbench_accounts_8 | table | postgres | permanent | 13 MB |\r\npublic | pgbench_accounts_9 | table | postgres | permanent | 13 MB |\r\npublic | pgbench_branches | table | postgres | permanent | 40 kB |\r\npublic | pgbench_history | table | postgres | permanent | 0 bytes |\r\npublic | pgbench_tellers | table | postgres | permanent | 40 kB |<\/pre>\n<p>&nbsp;<\/p>\n<p>We find our &#8216;pgbench&#8217; tables created in the public schema, owned by &#8216;postgres&#8217;.<\/p>\n<p>We note that &#8220;pgbench_accounts&#8221; was created as a partitioned table.<\/p>\n<p>To run the comparison tests needed later in this article, we copy the data from the &#8220;manuelo&#8221; database to a PostgreSQL 12 instance. The plain-format dump produced by &#8220;pg_dump&#8221; is restored with &#8220;psql&#8221;:<\/p>\n<p>&nbsp;<\/p>\n<pre>[postgres@<span style=\"color: #800080;\"><strong>pg13<\/strong><\/span>]$ pg_dump -v -f manuelo.dmp -n public manuelo\r\n[postgres@<span style=\"color: #008000;\"><strong>pg12<\/strong><\/span>]$ psql -d manuelo -f manuelo.dmp\r\n\r\n<\/pre>\n<h3><span style=\"color: #008080;\">What does FETCH FIRST WITH TIES actually bring?<\/span><\/h3>\n<p>When sorting query results, it is sometimes useful to look at the n highest values while also taking their ties into account. Until now, this required a &#8220;ranking&#8221; subquery over the given set. With PostgreSQL 13, it is simplified by the &#8220;WITH TIES&#8221; syntax.<br \/>\nPostgreSQL 12 already supported FETCH FIRST n ROWS ONLY. 
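<\/p>\n<p>As a quick sketch on a hypothetical &#8220;scores&#8221; table (an illustrative table name, not part of the pgbench schema), the two equivalent writings look like this; note that WITH TIES requires an ORDER BY, since the ties are defined by the sort key:<\/p>\n<pre>-- PostgreSQL 12 and earlier: top n with ties via a ranking subquery\r\nselect id, score\r\nfrom (select id, score, rank() over (order by score desc) as rnk\r\n      from scores) s\r\nwhere rnk &lt;= 10;\r\n\r\n-- PostgreSQL 13: same result with the new WITH TIES syntax\r\nselect id, score from scores\r\norder by score desc\r\nfetch first 10 rows with ties;<\/pre>\n<p>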
Let&#8217;s see what WITH TIES can add.<\/p>\n<p>Let&#8217;s reuse the data loaded earlier via pgbench.<\/p>\n<h4><span style=\"color: #993366;\">PostgreSQL 12<\/span><\/h4>\n<pre>manuelo=# show server_version;\r\nserver_version\r\n----------------\r\n12.0\r\n(1 row)<\/pre>\n<p>&nbsp;<\/p>\n<p>Our &#8220;pgbench_accounts&#8221; table is RANGE-partitioned on the &#8220;aid&#8221; column.<\/p>\n<pre>manuelo=# \\d+ pgbench_accounts\r\nPartitioned table \"public.pgbench_accounts\"\r\nColumn | Type | Collation | Nullable | Default | Storage | Stats target | Description\r\n----------+---------------+-----------+----------+---------+----------+--------------+-------------\r\naid | integer | | not null | | plain | |\r\nbid | integer | | | | plain | |\r\nabalance | integer | | | | plain | |\r\nfiller | character(84) | | | | extended | |\r\nPartition key: RANGE (aid)\r\nIndexes:\r\n\"pgbench_accounts_pkey\" PRIMARY KEY, btree (aid)\r\nForeign-key constraints:\r\n\"pgbench_accounts_bid_fkey\" FOREIGN KEY (bid) REFERENCES pgbench_branches(bid)\r\nReferenced by:\r\nTABLE \"pgbench_history\" CONSTRAINT \"pgbench_history_aid_fkey\" FOREIGN KEY (aid) REFERENCES pgbench_accounts(aid)\r\nPartitions: pgbench_accounts_1 FOR VALUES FROM (MINVALUE) TO (100001),\r\npgbench_accounts_10 FOR VALUES FROM (900001) TO (MAXVALUE),\r\npgbench_accounts_2 FOR VALUES FROM (100001) TO (200001),\r\npgbench_accounts_3 FOR VALUES FROM (200001) TO (300001),\r\npgbench_accounts_4 FOR VALUES FROM (300001) TO (400001),\r\npgbench_accounts_5 FOR VALUES FROM (400001) TO (500001),\r\npgbench_accounts_6 FOR VALUES FROM (500001) TO (600001),\r\npgbench_accounts_7 FOR VALUES FROM (600001) TO (700001),\r\npgbench_accounts_8 FOR VALUES FROM (700001) TO (800001),\r\npgbench_accounts_9 FOR VALUES FROM (800001) TO (900001)<\/pre>\n<p>&nbsp;<\/p>\n<p>Each partition therefore contains 100,000 rows.<\/p>\n<p>Now let&#8217;s query the TOP 10 highest &#8220;account id&#8221; values and their attached &#8220;branch id&#8221;, sorted descending on branch id.<\/p>\n<pre>manuelo=# select aid , bid from pgbench_accounts order by bid desc fetch first 10 rows only;\r\naid | bid\r\n--------+-----\r\n900002 | 10\r\n900003 | 10\r\n900004 | 10\r\n900005 | 10\r\n900006 | 10\r\n900007 | 10\r\n900008 | 10\r\n900009 | 10\r\n900010 | 10\r\n900001 | 10\r\n(10 rows)<\/pre>\n<p>The execution plan is as follows:<\/p>\n<pre>manuelo=# explain analyze select aid , bid from pgbench_accounts order by bid desc fetch first 10 rows only;\r\nQUERY PLAN\r\n------------------------------------------------------------------------------------------------------------------------------------------------------------\r\nLimit (cost=34369.82..34370.98 rows=10 width=8) (actual time=2736.423..2736.528 rows=10 loops=1)\r\n-&gt; Gather Merge (cost=34369.82..131599.60 rows=833340 width=8) (actual time=2736.421..2736.495 rows=10 loops=1)\r\nWorkers Planned: 2\r\nWorkers Launched: 2\r\n-&gt; Sort (cost=33369.79..34411.47 rows=416670 width=8) (actual time=2681.387..2681.394 rows=10 loops=3)\r\nSort Key: pgbench_accounts_1.bid DESC\r\nSort Method: top-N heapsort Memory: 25kB\r\nWorker 0: Sort Method: top-N heapsort Memory: 25kB\r\nWorker 1: Sort Method: top-N heapsort Memory: 25kB\r\n-&gt; Parallel Append (cost=0.00..24365.70 rows=416670 width=8) (actual time=0.012..1945.815 rows=333333 loops=3)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_1 (cost=0.00..2228.24 rows=58824 width=8) (actual time=0.006..240.764 rows=100000 loops=1)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_2 (cost=0.00..2228.24 rows=58824 width=8) (actual time=0.013..296.642 rows=100000 loops=1)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_3 (cost=0.00..2228.24 rows=58824 width=8) (actual time=0.032..220.039 rows=100000 loops=1)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_4 (cost=0.00..2228.24 rows=58824 width=8) (actual time=0.012..254.346 rows=100000 
loops=1)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_5 (cost=0.00..2228.24 rows=58824 width=8) (actual time=0.025..202.572 rows=100000 loops=1)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_6 (cost=0.00..2228.24 rows=58824 width=8) (actual time=0.010..296.606 rows=100000 loops=1)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_7 (cost=0.00..2228.24 rows=58824 width=8) (actual time=0.014..69.118 rows=33333 loops=3)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_8 (cost=0.00..2228.24 rows=58824 width=8) (actual time=0.006..184.851 rows=100000 loops=1)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_9 (cost=0.00..2228.24 rows=58824 width=8) (actual time=0.007..250.804 rows=100000 loops=1)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_10 (cost=0.00..2228.24 rows=58824 width=8) (actual time=0.010..231.848 rows=100000 loops=1)\r\nPlanning Time: 0.186 ms\r\nExecution Time: 2736.588 ms\r\n(22 rows)\r\n\r\nTime: 2737.157 ms (00:02.737)<\/pre>\n<p>&nbsp;<\/p>\n<p>It takes about 2.7 seconds to return the result of this EXPLAIN ANALYZE. Remember that the optimizer scans every partition, since there is no WHERE clause.<\/p>\n<p>Now let&#8217;s look at the execution plan when we want the TOP 10 highest &#8220;account id&#8221; values and their attached &#8220;branch id&#8221;, sorted by descending &#8220;branch id&#8221;, including ties. 
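<\/p>\n<p>Since pgbench was initialized with scale 10, each branch should hold 100,000 accounts; the number of ties on the top &#8220;bid&#8221; value can be checked directly (a simple sanity-check query, expected to match the partition size described above):<\/p>\n<pre>manuelo=# select count(*) from pgbench_accounts where bid = 10;\r\n count\r\n--------\r\n 100000\r\n(1 row)<\/pre>\n<p>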
Given how our &#8220;bid&#8221; column is laid out (equal values per partition of &#8220;pgbench_accounts&#8221;), we should get back every row of partition 10, i.e. 100,000 rows:<\/p>\n<pre>manuelo=# explain analyze select aid , bid from (select aid , bid , rank() over (order by bid desc) from pgbench_accounts) as acc where rank &lt;= 10 order by bid desc;\r\nQUERY PLAN\r\n---------------------------------------------------------------------------------------------------------------------------------------------------\r\nSubquery Scan on acc (cost=144732.34..174732.34 rows=333333 width=8) (actual time=4288.950..9857.908 rows=100000 loops=1)\r\nFilter: (acc.rank &lt;= 10)\r\nRows Removed by Filter: 900000\r\n-&gt; WindowAgg (cost=144732.34..162232.34 rows=1000000 width=16) (actual time=4288.946..9109.634 rows=1000000 loops=1)\r\n-&gt; Sort (cost=144732.34..147232.34 rows=1000000 width=8) (actual time=2834.834..3555.152 rows=1000000 loops=1)\r\nSort Key: pgbench_accounts_1.bid DESC\r\n<strong>Sort Method: external merge Disk: 17696kB<\/strong>\r\n-&gt; Append (cost=0.00..31400.00 rows=1000000 width=8) (actual time=0.014..2052.294 rows=1000000 loops=1)\r\n-&gt; Seq Scan on pgbench_accounts_1 (cost=0.00..2640.00 rows=100000 width=8) (actual time=0.012..75.332 rows=100000 loops=1)\r\n-&gt; Seq Scan on pgbench_accounts_2 (cost=0.00..2640.00 rows=100000 width=8) (actual time=0.012..75.270 rows=100000 loops=1)\r\n-&gt; Seq Scan on pgbench_accounts_3 (cost=0.00..2640.00 rows=100000 width=8) (actual time=0.012..77.936 rows=100000 loops=1)\r\n-&gt; Seq Scan on pgbench_accounts_4 (cost=0.00..2640.00 rows=100000 width=8) (actual time=0.024..76.067 rows=100000 loops=1)\r\n-&gt; Seq Scan on pgbench_accounts_5 (cost=0.00..2640.00 rows=100000 width=8) (actual time=0.010..78.318 rows=100000 loops=1)\r\n-&gt; Seq Scan on pgbench_accounts_6 (cost=0.00..2640.00 rows=100000 width=8) (actual time=0.019..81.886 rows=100000 loops=1)\r\n-&gt; Seq Scan on pgbench_accounts_7 (cost=0.00..2640.00 rows=100000 width=8) (actual time=1.330..82.675 rows=100000 loops=1)\r\n-&gt; Seq Scan on pgbench_accounts_8 (cost=0.00..2640.00 rows=100000 width=8) (actual time=0.010..77.019 rows=100000 loops=1)\r\n-&gt; Seq Scan on pgbench_accounts_9 (cost=0.00..2640.00 rows=100000 width=8) (actual time=0.011..76.075 rows=100000 loops=1)\r\n-&gt; Seq Scan on pgbench_accounts_10 (cost=0.00..2640.00 rows=100000 width=8) (actual time=0.011..76.073 rows=100000 loops=1)\r\nPlanning Time: 0.232 ms\r\nExecution Time: 9924.785 ms\r\n(20 rows)\r\n\r\nTime: 9925.463 ms (00:09.925)<\/pre>\n<p>&nbsp;<\/p>\n<p>This operation takes a little over 9 seconds, with an on-disk sort (just over 17 MB).<\/p>\n<p>Now let&#8217;s see how this behaves in PostgreSQL 13.<\/p>\n<h4><span style=\"color: #993366;\">PostgreSQL 13<\/span><\/h4>\n<p>Let&#8217;s run the same queries:<\/p>\n<pre>manuelo=# show server_version;\r\nserver_version\r\n----------------\r\n13.0\r\n(1 row)<\/pre>\n<p>&nbsp;<\/p>\n<p>First, the query for the TOP 10 highest &#8220;account id&#8221; values and their attached &#8220;branch id&#8221;, sorted by descending &#8220;branch id&#8221;:<\/p>\n<p>&nbsp;<\/p>\n<pre>manuelo=# select aid , bid from pgbench_accounts order by bid desc fetch first 10 rows only;\r\naid | bid\r\n--------+-----\r\n900002 | 10\r\n900003 | 10\r\n900004 | 10\r\n900005 | 10\r\n900006 | 10\r\n900007 | 10\r\n900008 | 10\r\n900009 | 10\r\n900010 | 10\r\n900001 | 10\r\n(10 rows)<\/pre>\n<p>&nbsp;<\/p>\n<p>The execution plan is the same as in 12, with almost identical performance: roughly 2.7 seconds for the EXPLAIN ANALYZE.<\/p>\n<pre>manuelo=# explain analyze select aid , bid from pgbench_accounts order by bid desc fetch first 10 rows only;\r\nQUERY 
PLAN\r\n------------------------------------------------------------------------------------------------------------------------------------------------------------\r\nLimit (cost=34369.82..34370.98 rows=10 width=8) (actual time=2713.365..2713.483 rows=10 loops=1)\r\n-&gt; Gather Merge (cost=34369.82..131599.60 rows=833340 width=8) (actual time=2713.362..2713.465 rows=10 loops=1)\r\nWorkers Planned: 2\r\nWorkers Launched: 2\r\n-&gt; Sort (cost=33369.79..34411.47 rows=416670 width=8) (actual time=2691.259..2691.275 rows=10 loops=3)\r\nSort Key: pgbench_accounts.bid DESC\r\nSort Method: top-N heapsort Memory: 25kB\r\nWorker 0: Sort Method: top-N heapsort Memory: 25kB\r\nWorker 1: Sort Method: top-N heapsort Memory: 25kB\r\n-&gt; Parallel Append (cost=0.00..24365.70 rows=416670 width=8) (actual time=0.010..1921.334 rows=333333 loops=3)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_1 (cost=0.00..2228.24 rows=58824 width=8) (actual time=0.006..275.988 rows=100000 loops=1)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_2 (cost=0.00..2228.24 rows=58824 width=8) (actual time=0.009..187.432 rows=100000 loops=1)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_3 (cost=0.00..2228.24 rows=58824 width=8) (actual time=0.006..186.142 rows=100000 loops=1)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_4 (cost=0.00..2228.24 rows=58824 width=8) (actual time=0.005..206.952 rows=100000 loops=1)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_5 (cost=0.00..2228.24 rows=58824 width=8) (actual time=0.010..164.090 rows=100000 loops=1)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_6 (cost=0.00..2228.24 rows=58824 width=8) (actual time=0.007..163.891 rows=100000 loops=1)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_7 (cost=0.00..2228.24 rows=58824 width=8) (actual time=0.007..91.653 rows=33333 loops=3)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_8 (cost=0.00..2228.24 rows=58824 width=8) (actual time=0.004..140.797 rows=100000 loops=1)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_9 
(cost=0.00..2228.24 rows=58824 width=8) (actual time=0.005..184.767 rows=100000 loops=1)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_10 (cost=0.00..2228.24 rows=58824 width=8) (actual time=0.008..165.512 rows=100000 loops=1)\r\nPlanning Time: 0.201 ms\r\nExecution Time: 2713.549 ms\r\n(22 rows)<\/pre>\n<p>&nbsp;<\/p>\n<p>Then the TOP 10 highest &#8220;account id&#8221; values and their attached &#8220;branch id&#8221;, including ties, using the rank() function method:<\/p>\n<pre>manuelo=# explain analyze select aid , bid from (select aid , bid , rank() over (order by bid desc) from pgbench_accounts) as acc where rank &lt;= 10 order by bid desc;\r\nQUERY PLAN\r\n---------------------------------------------------------------------------------------------------------------------------------------------------\r\nSubquery Scan on acc (cost=131057.84..161057.84 rows=333333 width=8) (actual time=2962.088..5939.797 rows=100000 loops=1)\r\nFilter: (acc.rank &lt;= 10)\r\nRows Removed by Filter: 900000\r\n-&gt; WindowAgg (cost=131057.84..148557.84 rows=1000000 width=16) (actual time=2962.085..5234.609 rows=1000000 loops=1)\r\n-&gt; Sort (cost=131057.84..133557.84 rows=1000000 width=8) (actual time=2816.923..3508.142 rows=1000000 loops=1)\r\nSort Key: pgbench_accounts.bid DESC\r\n<strong>Sort Method: external merge Disk: 17624kB<\/strong>\r\n-&gt; Append (cost=0.00..31400.00 rows=1000000 width=8) (actual time=0.011..2029.690 rows=1000000 loops=1)\r\n-&gt; Seq Scan on pgbench_accounts_1 (cost=0.00..2640.00 rows=100000 width=8) (actual time=0.009..78.495 rows=100000 loops=1)\r\n-&gt; Seq Scan on pgbench_accounts_2 (cost=0.00..2640.00 rows=100000 width=8) (actual time=0.009..77.523 rows=100000 loops=1)\r\n-&gt; Seq Scan on pgbench_accounts_3 (cost=0.00..2640.00 rows=100000 width=8) (actual time=0.008..82.124 rows=100000 loops=1)\r\n-&gt; Seq Scan on pgbench_accounts_4 (cost=0.00..2640.00 rows=100000 width=8) (actual time=0.009..76.659 
rows=100000 loops=1)\r\n-&gt; Seq Scan on pgbench_accounts_5 (cost=0.00..2640.00 rows=100000 width=8) (actual time=0.032..82.713 rows=100000 loops=1)\r\n-&gt; Seq Scan on pgbench_accounts_6 (cost=0.00..2640.00 rows=100000 width=8) (actual time=0.006..76.210 rows=100000 loops=1)\r\n-&gt; Seq Scan on pgbench_accounts_7 (cost=0.00..2640.00 rows=100000 width=8) (actual time=0.012..75.231 rows=100000 loops=1)\r\n-&gt; Seq Scan on pgbench_accounts_8 (cost=0.00..2640.00 rows=100000 width=8) (actual time=0.011..74.480 rows=100000 loops=1)\r\n-&gt; Seq Scan on pgbench_accounts_9 (cost=0.00..2640.00 rows=100000 width=8) (actual time=0.010..74.386 rows=100000 loops=1)\r\n-&gt; Seq Scan on pgbench_accounts_10 (cost=0.00..2640.00 rows=100000 width=8) (actual time=0.011..77.242 rows=100000 loops=1)\r\nPlanning Time: 0.216 ms\r\nExecution Time: 6006.397 ms\r\n(20 rows)<\/pre>\n<p>&nbsp;<\/p>\n<p>Performance is of the same order as in version 12 (about 6 seconds here, versus almost 10 seconds in 12), still with an on-disk sort.<br \/>\nNow let&#8217;s use the new FETCH FIRST WITH TIES syntax to return the same result set.<\/p>\n<pre>manuelo=# explain analyze select aid , bid from pgbench_accounts order by bid desc fetch first 10 rows with ties;\r\nQUERY PLAN\r\n------------------------------------------------------------------------------------------------------------------------------------------------------------\r\nLimit (cost=34369.82..34370.98 rows=10 width=8) (actual time=2741.817..3067.730 rows=100000 loops=1)\r\n-&gt; Gather Merge (cost=34369.82..131599.60 rows=833340 width=8) (actual time=2741.814..2941.271 rows=100001 loops=1)\r\nWorkers Planned: 2\r\nWorkers Launched: 2\r\n-&gt; Sort (cost=33369.79..34411.47 rows=416670 width=8) (actual time=2714.034..2736.566 rows=34426 loops=3)\r\nSort Key: pgbench_accounts.bid DESC\r\nSort Method: quicksort Memory: 28076kB\r\n<strong>Worker 0: Sort Method: quicksort Memory: 27841kB\r\nWorker 1: Sort Method: quicksort Memory: 27824kB<\/strong>\r\n-&gt; Parallel Append (cost=0.00..24365.70 rows=416670 width=8) (actual time=0.010..1973.108 rows=333333 loops=3)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_1 (cost=0.00..2228.24 rows=58824 width=8) (actual time=0.006..254.568 rows=100000 loops=1)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_2 (cost=0.00..2228.24 rows=58824 width=8) (actual time=0.011..208.208 rows=100000 loops=1)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_3 (cost=0.00..2228.24 rows=58824 width=8) (actual time=0.007..207.545 rows=100000 loops=1)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_4 (cost=0.00..2228.24 rows=58824 width=8) (actual time=0.007..207.043 rows=100000 loops=1)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_5 (cost=0.00..2228.24 rows=58824 width=8) (actual time=0.005..229.658 rows=100000 loops=1)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_6 (cost=0.00..2228.24 rows=58824 width=8) (actual time=0.005..163.504 rows=100000 loops=1)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_7 (cost=0.00..2228.24 rows=58824 width=8) (actual time=0.008..80.356 rows=33333 loops=3)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_8 (cost=0.00..2228.24 rows=58824 width=8) (actual time=0.004..273.048 rows=100000 loops=1)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_9 (cost=0.00..2228.24 rows=58824 width=8) (actual time=0.004..173.430 rows=100000 loops=1)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_10 (cost=0.00..2228.24 rows=58824 width=8) (actual time=0.008..162.951 rows=100000 loops=1)\r\nPlanning Time: 0.196 ms\r\nExecution Time: 3129.996 ms\r\n(22 rows)<\/pre>\n<p>We also get 100,000 rows back, but in about 3 seconds for this EXPLAIN ANALYZE instead of the 6 seconds of the ranking approach.<br \/>\nOn PostgreSQL 12, the same ranking query even took around 9.9 seconds.<br \/>\nFurthermore, all the sorts are done by 2 workers, entirely in memory: <span style=\"color: #0000ff;\">Sort Method: quicksort Memory: 
28076kB<\/span><\/p>\n<p>This syntax therefore saves both I\/O and execution time.<\/p>\n<p>&nbsp;<\/p>\n<h3><span style=\"color: #008080;\">Incremental sort: optimize the sorts in your joins!<\/span><\/h3>\n<p>Another new feature of PostgreSQL 13 is incremental sort. It is very useful for &#8220;TOP n&#8221; queries with &#8220;order by&#8221; and &#8220;limit&#8221;: when the input is already ordered on a prefix of the sort key, the optimizer does not need to re-sort the groups that are already in order when completing the full sort.<br \/>\nA concrete example makes this much clearer.<\/p>\n<p>Given two columns A and B, we might have the following values:<\/p>\n<pre>A   B\r\n--  --\r\n25  65\r\n25  77678\r\n25  90\r\n32  89\r\n32  176889\r\n32  7\r\n55  89\r\n55  779<\/pre>\n<p>With incremental sort, when the data is already ordered on column A, the PostgreSQL optimizer only has to sort the B values within each group of identical A values:<\/p>\n<pre>25,65 25,77678 25,90\r\n32,89 32,176889 32,7\r\n55,89 55,779<\/pre>\n<p>&nbsp;<\/p>\n<p>This yields a gain in cost and in I\/O, because the optimizer performs each sort on far fewer rows.<\/p>\n<p><strong><span style=\"color: #800080;\">PostgreSQL 12<\/span><\/strong><\/p>\n<p>Let&#8217;s take an example with a join between &#8220;pgbench_accounts&#8221; and &#8220;pgbench_branches&#8221;.<\/p>\n<pre>manuelo=# select a.aid, b.bid, a.abalance from pgbench_accounts a,pgbench_branches b \r\nwhere a.bid=b.bid order by a.aid desc,b.bid desc fetch first 10 rows only;\r\naid | bid | abalance\r\n---------+-----+----------\r\n1000000 | 10 | 0\r\n999999 | 10 | 0\r\n999998 | 10 | 0\r\n999997 | 10 | 0\r\n999996 | 10 | 0\r\n999995 | 10 | 0\r\n999994 | 10 | 0\r\n999993 | 10 | 0\r\n999992 | 10 | 
0\r\n999991 | 10 | 0\r\n(10 rows)\r\n\r\nTime: 429.901 ms<\/pre>\n<p><strong>400ms<\/strong> pour l&#8217;ex\u00e9cution de cette requ\u00eate. Regardons le plan.<\/p>\n<pre>manuelo=# explain analyze select a.aid, b.bid, a.abalance from pgbench_accounts a,pgbench_branches b \r\nwhere a.bid=b.bid order by a.aid desc,b.bid desc fetch first 10 rows only;<\/pre>\n<pre>QUERY PLAN\r\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\nLimit (cost=35928.28..35929.45 rows=10 width=12) (actual time=4207.405..4207.509 rows=10 loops=1)\r\n-&gt; Gather Merge (cost=35928.28..133157.37 rows=833334 width=12) (actual time=4207.402..4207.479 rows=10 loops=1)\r\nWorkers Planned: 2\r\nWorkers Launched: 2\r\n-&gt; <strong>Sort (cost=34928.26..35969.92 rows=416667 width=12) (actual time=4186.212..4186.219 rows=10 loops=3)<\/strong>\r\nSort Key: a.aid DESC, b.bid DESC\r\nSort Method: top-N heapsort Memory: 25kB\r\nWorker 0: Sort Method: top-N heapsort Memory: 25kB\r\nWorker 1: Sort Method: top-N heapsort Memory: 25kB\r\n-&gt; Hash Join (cost=1.23..25924.23 rows=416667 width=12) (actual time=3.593..3388.957 rows=333333 loops=3)\r\nHash Cond: (a.bid = b.bid)\r\n-&gt; Parallel Append (cost=0.00..24365.70 rows=416670 width=12) (actual time=3.483..1976.369 rows=333333 loops=3)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_1 a (cost=0.00..2228.24 rows=58824 width=12) (actual time=0.007..187.719 rows=100000 loops=1)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_2 a_1 (cost=0.00..2228.24 rows=58824 width=12) (actual time=10.429..277.525 rows=100000 loops=1)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_3 a_2 (cost=0.00..2228.24 rows=58824 width=12) (actual time=10.436..232.550 rows=100000 loops=1)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_4 a_3 (cost=0.00..2228.24 rows=58824 width=12) (actual time=0.005..253.731 rows=100000 loops=1)\r\n-&gt; Parallel Seq Scan on 
pgbench_accounts_5 a_4 (cost=0.00..2228.24 rows=58824 width=12) (actual time=0.012..187.681 rows=100000 loops=1)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_6 a_5 (cost=0.00..2228.24 rows=58824 width=12) (actual time=10.865..234.133 rows=100000 loops=1)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_7 a_6 (cost=0.00..2228.24 rows=58824 width=12) (actual time=3.713..88.257 rows=33333 loops=3)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_8 a_7 (cost=0.00..2228.24 rows=58824 width=12) (actual time=0.004..253.161 rows=100000 loops=1)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_9 a_8 (cost=0.00..2228.24 rows=58824 width=12) (actual time=0.005..230.784 rows=100000 loops=1)\r\n-&gt; Parallel Seq Scan on pgbench_accounts_10 a_9 (cost=0.00..2228.24 rows=58824 width=12) (actual time=0.007..120.950 rows=100000 loops=1)\r\n-&gt; Hash (cost=1.10..1.10 rows=10 width=4) (actual time=0.029..0.030 rows=10 loops=3)\r\nBuckets: 1024 Batches: 1 Memory Usage: 9kB\r\n-&gt; Seq Scan on pgbench_branches b (cost=0.00..1.10 rows=10 width=4) (actual time=0.009..0.017 rows=10 loops=3)\r\nPlanning Time: 0.335 ms\r\nExecution Time: 4207.612 ms\r\n(27 rows)<\/pre>\n<p>&nbsp;<\/p>\n<p>Pour ramener les 10 lignes de r\u00e9sultat, nous effectuons un tri sur plus de 416000 lignes.<\/p>\n<p><strong><span style=\"color: #800080;\">PostgreSQL 13<\/span><\/strong><\/p>\n<p>Voyons maintenant cette m\u00eame requ\u00eate en version 13. 
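Before looking at the version 13 plan, the batching mechanism described above can be sketched in a few lines of Python (a conceptual illustration only, not the server&#8217;s actual C implementation): rows already sorted on the first key are sorted group by group on the second key, and with a LIMIT the later groups are never even examined.

```python
from itertools import groupby

def incremental_sort(rows, limit=None):
    """Rows are assumed already sorted on the first column (the
    'presorted key').  Sort each same-key batch on the second column
    instead of sorting the whole input, and stop as soon as `limit`
    rows have been produced."""
    out = []
    for _, batch in groupby(rows, key=lambda r: r[0]):
        out.extend(sorted(batch, key=lambda r: r[1]))
        if limit is not None and len(out) >= limit:
            return out[:limit]  # later batches are never examined
    return out

# The (A, B) values from the example above, presorted on A:
rows = [(25, 65), (25, 77678), (25, 90),
        (32, 89), (32, 176889), (32, 7),
        (55, 89), (55, 779)]
print(incremental_sort(rows, limit=4))
# → [(25, 65), (25, 90), (25, 77678), (32, 7)]
```

With `limit=4` the (55, …) batch is never sorted at all, which is exactly why the planner's Limit node above an Incremental Sort is so cheap.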
The execution plan is as follows:<\/p>\n<p>&nbsp;<\/p>\n<pre>manuelo=# explain analyze select a.aid, b.bid, a.abalance from pgbench_accounts a,pgbench_branches b \r\nwhere a.bid=b.bid order by a.aid desc,b.bid desc fetch first 10 rows only;\r\nQUERY PLAN\r\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\nLimit (cost=3.12..5.42 rows=10 width=12) (actual time=0.229..0.256 rows=10 loops=1)\r\n-&gt; <strong>Incremental Sort (cost=3.12..230201.32 rows=1000000 width=12) (actual time=0.227..0.241 rows=10 loops=1)<\/strong>\r\nSort Key: a.aid DESC, b.bid DESC\r\nPresorted Key: a.aid\r\nFull-sort Groups: 1 Sort Method: quicksort Average Memory: 25kB Peak Memory: 25kB\r\n-&gt; Nested Loop (cost=2.92..185201.32 rows=1000000 width=12) (actual time=0.050..0.218 rows=11 loops=1)\r\nJoin Filter: (a.bid = b.bid)\r\nRows Removed by Join Filter: 99\r\n-&gt; Append (cost=2.92..47472.92 rows=1000000 width=12) (actual time=0.013..0.036 rows=11 loops=1)\r\n-&gt; Index Scan Backward using pgbench_accounts_10_pkey on pgbench_accounts_10 a_10 (cost=0.29..4247.29 rows=100000 width=12) (actual time=0.011..0.020 rows=11 loops=1)\r\n-&gt; Index Scan Backward using pgbench_accounts_9_pkey on pgbench_accounts_9 a_9 (cost=0.29..4247.29 rows=100000 width=12) (never executed)\r\n-&gt; Index Scan Backward using pgbench_accounts_8_pkey on pgbench_accounts_8 a_8 (cost=0.29..4247.29 rows=100000 width=12) (never executed)\r\n-&gt; Index Scan Backward using pgbench_accounts_7_pkey on pgbench_accounts_7 a_7 (cost=0.29..4247.29 rows=100000 width=12) (never executed)\r\n-&gt; Index Scan Backward using pgbench_accounts_6_pkey on pgbench_accounts_6 a_6 (cost=0.29..4247.29 rows=100000 width=12) (never executed)\r\n-&gt; Index Scan Backward using pgbench_accounts_5_pkey on pgbench_accounts_5 a_5 (cost=0.29..4247.29 rows=100000 width=12) (never executed)\r\n-&gt; Index Scan Backward using pgbench_accounts_4_pkey on pgbench_accounts_4 a_4 (cost=0.29..4247.29 rows=100000 width=12) (never executed)\r\n-&gt; Index Scan Backward using pgbench_accounts_3_pkey on pgbench_accounts_3 a_3 (cost=0.29..4247.29 rows=100000 width=12) (never executed)\r\n-&gt; Index Scan Backward using pgbench_accounts_2_pkey on pgbench_accounts_2 a_2 (cost=0.29..4247.29 rows=100000 width=12) (never executed)\r\n-&gt; Index Scan Backward using pgbench_accounts_1_pkey on pgbench_accounts_1 a_1 (cost=0.29..4247.29 rows=100000 width=12) (never executed)\r\n-&gt; Materialize (cost=0.00..1.15 rows=10 width=4) (actual time=0.001..0.008 rows=10 loops=11)\r\n-&gt; Seq Scan on pgbench_branches b (cost=0.00..1.10 rows=10 width=4) (actual time=0.005..0.012 rows=10 loops=1)\r\nPlanning Time: 0.328 ms\r\nExecution Time: 0.308 ms\r\n(23 rows)<\/pre>\n<p>&nbsp;<\/p>\n<p>These 10 rows are returned with an incremental sort that reads, at most, the 100,000 rows of a single partition: the nine other index scans are never executed, and the sort node itself finishes in about 0.24 ms.<\/p>\n<p>At execution time, the verdict is clear:<\/p>\n<pre>manuelo=# select a.aid, b.bid, a.abalance from pgbench_accounts a,pgbench_branches b \r\nwhere a.bid=b.bid order by a.aid desc,b.bid desc fetch first 10 rows only;\r\naid | bid | abalance\r\n---------+-----+----------\r\n1000000 | 10 | 0\r\n999999 | 10 | 0\r\n999998 | 10 | 0\r\n999997 | 10 | 0\r\n999996 | 10 | 0\r\n999995 | 10 | 0\r\n999994 | 10 | 0\r\n999993 | 10 | 0\r\n999992 | 10 | 0\r\n999991 | 10 | 0\r\n(10 rows)\r\n\r\nTime: 0.895 ms<\/pre>\n<p>&nbsp;<\/p>\n<p>Under 1 ms, where version 12 needed more than 400 ms.<\/p>\n<p>&nbsp;<\/p>\n<h4><span style=\"color: #008080;\">Why do some of my B-Tree indexes take up less space in PostgreSQL 13?<\/span><\/h4>\n<p>&nbsp;<\/p>\n<p>Another feature introduced in version 13 is the aggregation of duplicate values in indexes. 
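Conceptually, the saving comes from storing a duplicated key once, followed by the list of its row pointers, instead of repeating one (key, pointer) pair per table row. A toy Python model of the idea (the names `naive_leaf` and `deduplicated_leaf` are ours, purely for illustration; the real on-disk posting-list format is more involved):

```python
def naive_leaf(entries):
    """Pre-13 style: one (key, tid) pair per table row, the key being
    repeated every time, even when it is always the same value."""
    return [(key, tid) for key, tid in entries]

def deduplicated_leaf(entries):
    """PostgreSQL-13 style idea: each duplicated key is stored once,
    followed by the list of its row pointers (a 'posting list')."""
    leaf = {}
    for key, tid in entries:
        leaf.setdefault(key, []).append(tid)
    return leaf

# The 100,000 rows of partition 1 all carry bid = 1:
entries = [(1, tid) for tid in range(100_000)]
print(len(naive_leaf(entries)))         # 100000 copies of the key
print(len(deduplicated_leaf(entries)))  # the key 1 is stored once
```

The 100,000 copies of the key collapse into a single key plus a pointer list, which is the intuition behind the index-size figures measured below.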
In other words, each duplicated value is no longer repeated inside the index: it is deduplicated!<br \/>\nLet&#8217;s see how it works.<\/p>\n<p>Take the &#8220;bid&#8221; column of our &#8220;pgbench_accounts&#8221; table. It holds the branch number of every &#8220;account id&#8221; in the table, so the same values repeat within each partition. In this example we will work partition by partition.<\/p>\n<p><strong><span style=\"color: #800080;\">PostgreSQL 12.<\/span><\/strong><\/p>\n<p>Let&#8217;s create 2 indexes on partition 1 of our &#8220;pgbench_accounts&#8221; table:<br \/>\n&#8211; one index on the &#8220;aid&#8221; column<br \/>\n&#8211; one index on the &#8220;bid&#8221; column<\/p>\n<p>&nbsp;<\/p>\n<pre>manuelo=# \\d pgbench_accounts\r\nPartitioned table \"public.pgbench_accounts\"\r\nColumn | Type | Collation | Nullable | Default\r\n----------+---------------+-----------+----------+---------\r\naid | integer | | not null |\r\nbid | integer | | |\r\nabalance | integer | | |\r\nfiller | character(84) | | |\r\nPartition key: RANGE (aid)<\/pre>\n<p>&nbsp;<\/p>\n<p>We know that for &#8220;bid&#8221;, the value 1 repeats throughout this partition 1:<\/p>\n<pre>manuelo=# select * from pgbench_accounts_1 limit 10;\r\naid | bid | abalance | filler\r\n-----+-----+----------+--------------------------------------------------------------------------------------\r\n1 | 1 | 0 |\r\n2 | 1 | 0 |\r\n3 | 1 | 0 |\r\n4 | 1 | 0 |\r\n5 | 1 | 0 |\r\n6 | 1 | 0 |\r\n7 | 1 | 0 |\r\n8 | 1 | 0 |\r\n9 | 1 | 0 |\r\n10 | 1 | 0 |\r\n(10 rows)<\/pre>\n<p>&nbsp;<\/p>\n<pre>manuelo=# create index account_id_1 on pgbench_accounts_1(aid);\r\nCREATE INDEX\r\n\r\n\r\nmanuelo=# create index account_bid_1 on pgbench_accounts_1(bid);\r\nCREATE INDEX<\/pre>\n<p>Let&#8217;s refresh the statistics on this partition:<\/p>\n<pre>manuelo=# vacuum analyse pgbench_accounts_1;\r\nVACUUM<\/pre>\n<p>Now let&#8217;s look at the size of each index:<\/p>\n<pre>manuelo=# select pg_table_size('account_id_1')\/1024 as \"Taille aid Ko\";\r\nTaille aid Ko\r\n-----------\r\n2208\r\n\r\n\r\nmanuelo=# select pg_table_size('account_bid_1')\/1024 as \"Taille bid Ko\";\r\nTaille bid Ko\r\n-------------\r\n2224<\/pre>\n<p>So the two indexes on this table, on the &#8220;aid&#8221; and &#8220;bid&#8221; columns, are roughly the same size: about 2 MB each.<\/p>\n<p>&nbsp;<\/p>\n<p><strong><span style=\"color: #800080;\">PostgreSQL 13<\/span><\/strong><\/p>\n<p>Let&#8217;s repeat the operation on PostgreSQL 13:<\/p>\n<p>&nbsp;<\/p>\n<pre>manuelo=# create index account_bid_1 on pgbench_accounts_1(bid);\r\nCREATE INDEX\r\n\r\n\r\nmanuelo=# create index account_id_1 on pgbench_accounts_1(aid);\r\nCREATE INDEX\r\n\r\n\r\nmanuelo=# vacuum analyse pgbench_accounts_1;\r\nVACUUM<\/pre>\n<p>And check the size of each index again:<\/p>\n<pre>manuelo=# select pg_table_size('account_id_1')\/1024 as \"Taille aid Ko\";\r\nTaille aid Ko\r\n---------------\r\n2208\r\n(1 row)\r\n\r\nmanuelo=# select pg_table_size('account_bid_1')\/1024 as \"Taille bid Ko\";\r\nTaille bid Ko\r\n---------------\r\n696\r\n(1 row)<\/pre>\n<p>&nbsp;<\/p>\n<p>The index on &#8220;aid&#8221; is still about 2 MB: its values are all distinct, since &#8220;aid&#8221; is the unique account identifier, so there is nothing to deduplicate.<\/p>\n<p>For &#8220;bid&#8221;, on the other hand, where every value is 1, the index now takes only about 700 KB: PostgreSQL aggregates the duplicated value 1 and stores it once, saving disk space.<\/p>\n<p>&nbsp;<\/p>\n<h4><span style=\"color: #008080;\">Verifying the integrity of your backups<\/span><\/h4>\n<p>&nbsp;<\/p>\n<p>In your PostgreSQL maintenance scripts you most likely call tools such as &#8220;pg_basebackup&#8221; and &#8220;pg_archivecleanup&#8221; to manage database backups and their associated WALs.<br \/>\nA new tool now lets you verify the integrity of those backups: &#8220;pg_verifybackup&#8221;.<\/p>\n<p>The &#8220;pg_verifybackup&#8221; tool recomputes checksums inside the backup, much like a RESTORE VERIFYONLY does for SQL Server. This is why the backup must be taken with &#8220;pg_basebackup&#8221; in &#8220;plain&#8221; format rather than tar.<\/p>\n<p>Let&#8217;s back up our PostgreSQL 13 instance, in &#8220;plain&#8221; format, with the WALs in fetch mode.<\/p>\n<p>&nbsp;<\/p>\n<pre>[postgres@pg13]$ mkdir -p \/data\/postgres\/backup\/bkp_PG13\r\n[postgres@pg13]$ pg_basebackup -D \/data\/postgres\/backup\/bkp_PG13 -Fp -Xf -v -P\r\npg_basebackup: initiating base backup, waiting for checkpoint to complete\r\npg_basebackup: checkpoint completed\r\npg_basebackup: write-ahead log start point: 0\/B4000028 on timeline 2\r\n204392\/204392 kB (100%), 1\/1 tablespace\r\npg_basebackup: write-ahead log end point: 0\/B4000100\r\npg_basebackup: syncing data to disk ...\r\npg_basebackup: renaming backup_manifest.tmp to backup_manifest\r\npg_basebackup: base backup completed\r\n\r\n<\/pre>\n<p>Looking at the contents of the directory, we find the classic PostgreSQL files we expect in a &#8220;plain&#8221; backup.<br \/>\nWe also now find a file named &#8220;backup_manifest&#8221;.<\/p>\n<p>This file is human-readable and contains information about every file making up our instance. 
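Purely as an illustration of what this manifest enables, the per-file check can be re-implemented in a few lines of Python. This is a simplified sketch, not pg_verifybackup itself: it ignores WAL parsing and the manifest&#8217;s own checksum, and it assumes a manifest produced with &#8220;pg_basebackup --manifest-checksums=sha256&#8221;, because Python&#8217;s standard library cannot compute the default CRC32C.

```python
import hashlib
import json
import os
import tempfile

def verify_backup(backup_dir):
    """Re-check every file listed in backup_manifest against its
    recorded size and checksum; return the list of mismatches."""
    with open(os.path.join(backup_dir, "backup_manifest")) as f:
        manifest = json.load(f)
    errors = []
    for entry in manifest["Files"]:
        with open(os.path.join(backup_dir, entry["Path"]), "rb") as f:
            data = f.read()
        if len(data) != entry["Size"]:
            errors.append(entry["Path"] + ": size mismatch")
        digest = hashlib.new(entry["Checksum-Algorithm"].lower(), data).hexdigest()
        if digest != entry["Checksum"]:
            errors.append(entry["Path"] + ": checksum mismatch")
    return errors

# Build a miniature fake backup just to exercise the function:
d = tempfile.mkdtemp()
with open(os.path.join(d, "PG_VERSION"), "wb") as f:
    f.write(b"13\n")
manifest = {"PostgreSQL-Backup-Manifest-Version": 1,
            "Files": [{"Path": "PG_VERSION", "Size": 3,
                       "Checksum-Algorithm": "sha256",
                       "Checksum": hashlib.sha256(b"13\n").hexdigest()}]}
with open(os.path.join(d, "backup_manifest"), "w") as f:
    json.dump(manifest, f)
print(verify_backup(d))  # → []
```

An empty list means every listed file still matches its recorded size and checksum; corrupting any file in the copy makes the corresponding mismatch appear.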
For each file it records the name, the last modification date, the checksum algorithm, the size, and the checksum value.<br \/>\nHere is an excerpt:<\/p>\n<pre>[postgres@PG13]$ head -n 15 backup_manifest\r\n{ \"PostgreSQL-Backup-Manifest-Version\": 1,\r\n\"Files\": [\r\n{ \"Path\": \"backup_label\", \"Size\": 226, \"Last-Modified\": \"2020-10-16 15:59:48 GMT\", \"Checksum-Algorithm\": \"CRC32C\", \"Checksum\": \"ea4e7ad9\" },\r\n{ \"Path\": \"current_logfiles\", \"Size\": 30, \"Last-Modified\": \"2020-10-16 09:19:39 GMT\", \"Checksum-Algorithm\": \"CRC32C\", \"Checksum\": \"f6838adf\" },\r\n{ \"Path\": \"pg_multixact\/members\/0000\", \"Size\": 8192, \"Last-Modified\": \"2020-10-12 16:20:35 GMT\", \"Checksum-Algorithm\": \"CRC32C\", \"Checksum\": \"23464490\" },\r\n{ \"Path\": \"pg_multixact\/offsets\/0000\", \"Size\": 8192, \"Last-Modified\": \"2020-10-16 09:24:39 GMT\", \"Checksum-Algorithm\": \"CRC32C\", \"Checksum\": \"23464490\" },\r\n{ \"Path\": \"base\/16403\/16576\", \"Size\": 8192, \"Last-Modified\": \"2020-10-12 16:56:22 GMT\", \"Checksum-Algorithm\": \"CRC32C\", \"Checksum\": \"4fbeb31c\" },\r\n{ \"Path\": \"base\/16403\/16659\", \"Size\": 2260992, \"Last-Modified\": \"2020-10-13 15:41:00 GMT\", \"Checksum-Algorithm\": \"CRC32C\", \"Checksum\": \"0fc9222c\" },\r\n{ \"Path\": \"base\/16403\/16461\", \"Size\": 16384, \"Last-Modified\": \"2020-10-13 15:44:15 GMT\", \"Checksum-Algorithm\": \"CRC32C\", \"Checksum\": \"0e627e6f\" },\r\n{ \"Path\": \"base\/16403\/3381\", \"Size\": 0, \"Last-Modified\": \"2020-10-12 16:20:35 GMT\", \"Checksum-Algorithm\": \"CRC32C\", \"Checksum\": \"00000000\" },\r\n{ \"Path\": \"base\/16403\/16475\", \"Size\": 16384, \"Last-Modified\": \"2020-10-12 16:55:58 GMT\", \"Checksum-Algorithm\": \"CRC32C\", \"Checksum\": \"ccdbb572\" },\r\n{ \"Path\": \"base\/16403\/16489\", \"Size\": 8192, \"Last-Modified\": \"2020-10-12 16:56:01 GMT\", \"Checksum-Algorithm\": \"CRC32C\", \"Checksum\": \"82a0e924\" },\r\n{ \"Path\": \"base\/16403\/2602_fsm\", \"Size\": 24576, \"Last-Modified\": \"2020-10-12 16:20:35 GMT\", \"Checksum-Algorithm\": \"CRC32C\", \"Checksum\": \"5786bbba\" },\r\n{ \"Path\": \"base\/16403\/16648_fsm\", \"Size\": 24576, \"Last-Modified\": \"2020-10-13 15:41:02 GMT\", \"Checksum-Algorithm\": \"CRC32C\", \"Checksum\": \"9e571b03\" },\r\n{ \"Path\": \"base\/16403\/16497\", \"Size\": 16384, \"Last-Modified\": \"2020-10-13 15:44:16 GMT\", \"Checksum-Algorithm\": \"CRC32C\", \"Checksum\": \"bb227b8f\" },<\/pre>\n<p>&nbsp;<\/p>\n<p>This is JSON, and it is exactly what &#8220;pg_verifybackup&#8221; exploits to validate our backup.<br \/>\nLet&#8217;s run the command:<\/p>\n<p>&nbsp;<\/p>\n<pre>[postgres@pg13]$ pg_verifybackup -e \/data\/postgres\/backup\/bkp_PG13\r\n\r\n<\/pre>\n<p>Beware: you may get an error during WAL verification. &#8220;pg_verifybackup&#8221; relies on the &#8220;pg_waldump&#8221; tool to look for the WALs inside the backup, according to the current timeline.<br \/>\nThe expected range can be seen in the manifest file:<\/p>\n<pre>\"WAL-Ranges\": [\r\n{ \"Timeline\": 2, \"Start-LSN\": \"0\/9B0000A0\", \"End-LSN\": \"0\/C1000100\" }\r\n],<\/pre>\n<p>&nbsp;<\/p>\n<p>The error you might encounter looks like this:<\/p>\n<pre>pg_waldump: fatal: could not find file \"00000002000000000000009B\": No such file or directory\r\npg_verifybackup: error: WAL parsing failed for timeline 2<\/pre>\n<p>This is all the more likely if &#8220;archive_mode&#8221; is on with an &#8220;archive_command&#8221; that ships the WALs to an archive directory.<br \/>\nIndeed, &#8220;pg_basebackup&#8221; only copies what is present in &#8220;pg_wal&#8221;, not the WALs already archived elsewhere.<br \/>\nIf you want to skip checking all the WALs tied to the current timeline, you can run the command with the &#8220;-n&#8221; option:<\/p>\n<pre>[postgres@ip-172-44-2-98 pg_wal]$ pg_verifybackup --help | grep \"wal\"\r\n-n, --no-parse-wal do not try to parse WAL files<\/pre>\n<p>&nbsp;<\/p>\n<pre>[postgres@ip-172-44-2-98 pg_wal]$ pg_verifybackup -e \/data\/postgres\/backup\/bkp_PG13 -n\r\n<span style=\"color: #339966;\">backup successfully verified<\/span><\/pre>\n<p>&nbsp;<\/p>\n<p>Thank you for reading!<\/p>\n<p>Feel free to leave a comment.<\/p>\n<a class=\"synved-social-button synved-social-button-share synved-social-size-24 synved-social-resolution-single synved-social-provider-twitter nolightbox\" data-provider=\"twitter\" target=\"_blank\" rel=\"nofollow\" title=\"Share on Twitter\" href=\"https:\/\/twitter.com\/intent\/tweet?url=https%3A%2F%2Fblog.capdata.fr%2Findex.php%2Fwp-json%2Fwp%2Fv2%2Fposts%2F8321&#038;text=Article%20sur%20le%20blog%20de%20la%20Capdata%20Tech%20Team%20%3A%20\" style=\"font-size: 0px;width:24px;height:24px;margin:0;margin-bottom:5px;margin-right:5px\"><img loading=\"lazy\" decoding=\"async\" alt=\"twitter\" title=\"Share on Twitter\" class=\"synved-share-image synved-social-image synved-social-image-share\" width=\"24\" height=\"24\" style=\"display: inline;width:24px;height:24px;margin: 0;padding: 0;border: none;box-shadow: none\" src=\"https:\/\/blog.capdata.fr\/wp-content\/plugins\/social-media-feather\/synved-social\/image\/social\/regular\/48x48\/twitter.png\" \/><\/a><a class=\"synved-social-button synved-social-button-share synved-social-size-24 synved-social-resolution-single synved-social-provider-linkedin nolightbox\" data-provider=\"linkedin\" target=\"_blank\" rel=\"nofollow\" title=\"Share on Linkedin\" href=\"https:\/\/www.linkedin.com\/shareArticle?mini=true&#038;url=https%3A%2F%2Fblog.capdata.fr%2Findex.php%2Fwp-json%2Fwp%2Fv2%2Fposts%2F8321&#038;title=PostgreSQL%2013%20%3A%20pr%C3%A9sentation\" style=\"font-size: 
0px;width:24px;height:24px;margin:0;margin-bottom:5px;margin-right:5px\"><img loading=\"lazy\" decoding=\"async\" alt=\"linkedin\" title=\"Share on Linkedin\" class=\"synved-share-image synved-social-image synved-social-image-share\" width=\"24\" height=\"24\" style=\"display: inline;width:24px;height:24px;margin: 0;padding: 0;border: none;box-shadow: none\" src=\"https:\/\/blog.capdata.fr\/wp-content\/plugins\/social-media-feather\/synved-social\/image\/social\/regular\/48x48\/linkedin.png\" \/><\/a><a class=\"synved-social-button synved-social-button-share synved-social-size-24 synved-social-resolution-single synved-social-provider-mail nolightbox\" data-provider=\"mail\" rel=\"nofollow\" title=\"Share by email\" href=\"mailto:?subject=PostgreSQL%2013%20%3A%20pr%C3%A9sentation&#038;body=Article%20sur%20le%20blog%20de%20la%20Capdata%20Tech%20Team%20%3A%20:%20https%3A%2F%2Fblog.capdata.fr%2Findex.php%2Fwp-json%2Fwp%2Fv2%2Fposts%2F8321\" style=\"font-size: 0px;width:24px;height:24px;margin:0;margin-bottom:5px\"><img loading=\"lazy\" decoding=\"async\" alt=\"mail\" title=\"Share by email\" class=\"synved-share-image synved-social-image synved-social-image-share\" width=\"24\" height=\"24\" style=\"display: inline;width:24px;height:24px;margin: 0;padding: 0;border: none;box-shadow: none\" src=\"https:\/\/blog.capdata.fr\/wp-content\/plugins\/social-media-feather\/synved-social\/image\/social\/regular\/48x48\/mail.png\" \/><\/a>","protected":false},"excerpt":{"rendered":"<p>&nbsp; &nbsp; &nbsp; La nouvelle release officielle de PostgreSQL est sorti le 24 septembre 2020. Place \u00e0 la version 13. 
Celle ci est bien entendu, t\u00e9l\u00e9chargeable sur le site PostgreSQL.org sur lequel vous pourrez retrouver les sources, les derniers packages.&hellip; <a href=\"https:\/\/blog.capdata.fr\/index.php\/postgresql-13-les-nouveautes-interessantes\/\" class=\"more-link\">Continuer la lecture <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":32,"featured_media":8349,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[266],"tags":[],"class_list":["post-8321","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-postgresql"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v20.8 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>PostgreSQL 13 : pr\u00e9sentation - Capdata TECH BLOG<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/blog.capdata.fr\/index.php\/postgresql-13-les-nouveautes-interessantes\/\" \/>\n<meta property=\"og:locale\" content=\"fr_FR\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"PostgreSQL 13 : pr\u00e9sentation - Capdata TECH BLOG\" \/>\n<meta property=\"og:description\" content=\"&nbsp; &nbsp; &nbsp; La nouvelle release officielle de PostgreSQL est sorti le 24 septembre 2020. Place \u00e0 la version 13. 
Celle ci est bien entendu, t\u00e9l\u00e9chargeable sur le site PostgreSQL.org sur lequel vous pourrez retrouver les sources, les derniers packages.&hellip; Continuer la lecture &rarr;\" \/>\n<meta property=\"og:url\" content=\"https:\/\/blog.capdata.fr\/index.php\/postgresql-13-les-nouveautes-interessantes\/\" \/>\n<meta property=\"og:site_name\" content=\"Capdata TECH BLOG\" \/>\n<meta property=\"article:published_time\" content=\"2020-10-30T11:51:42+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2020-10-30T11:51:35+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2020\/10\/images-1.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"216\" \/>\n\t<meta property=\"og:image:height\" content=\"234\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Emmanuel RAMI\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"\u00c9crit par\" \/>\n\t<meta name=\"twitter:data1\" content=\"Emmanuel RAMI\" \/>\n\t<meta name=\"twitter:label2\" content=\"Dur\u00e9e de lecture estim\u00e9e\" \/>\n\t<meta name=\"twitter:data2\" content=\"25 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/blog.capdata.fr\/index.php\/postgresql-13-les-nouveautes-interessantes\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/blog.capdata.fr\/index.php\/postgresql-13-les-nouveautes-interessantes\/\"},\"author\":{\"name\":\"Emmanuel RAMI\",\"@id\":\"https:\/\/blog.capdata.fr\/#\/schema\/person\/797b9b6698fa35f7ce3e9a70a8b102ae\"},\"headline\":\"PostgreSQL 13 : 
pr\u00e9sentation\",\"datePublished\":\"2020-10-30T11:51:42+00:00\",\"dateModified\":\"2020-10-30T11:51:35+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/blog.capdata.fr\/index.php\/postgresql-13-les-nouveautes-interessantes\/\"},\"wordCount\":1869,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/blog.capdata.fr\/#organization\"},\"articleSection\":[\"PostgreSQL\"],\"inLanguage\":\"fr-FR\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/blog.capdata.fr\/index.php\/postgresql-13-les-nouveautes-interessantes\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/blog.capdata.fr\/index.php\/postgresql-13-les-nouveautes-interessantes\/\",\"url\":\"https:\/\/blog.capdata.fr\/index.php\/postgresql-13-les-nouveautes-interessantes\/\",\"name\":\"PostgreSQL 13 : pr\u00e9sentation - Capdata TECH BLOG\",\"isPartOf\":{\"@id\":\"https:\/\/blog.capdata.fr\/#website\"},\"datePublished\":\"2020-10-30T11:51:42+00:00\",\"dateModified\":\"2020-10-30T11:51:35+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/blog.capdata.fr\/index.php\/postgresql-13-les-nouveautes-interessantes\/#breadcrumb\"},\"inLanguage\":\"fr-FR\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/blog.capdata.fr\/index.php\/postgresql-13-les-nouveautes-interessantes\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/blog.capdata.fr\/index.php\/postgresql-13-les-nouveautes-interessantes\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Accueil\",\"item\":\"https:\/\/blog.capdata.fr\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"PostgreSQL 13 : pr\u00e9sentation\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/blog.capdata.fr\/#website\",\"url\":\"https:\/\/blog.capdata.fr\/\",\"name\":\"Capdata TECH BLOG\",\"description\":\"Le blog technique sur les bases de donn\u00e9es de CAP DATA 
Consulting\",\"publisher\":{\"@id\":\"https:\/\/blog.capdata.fr\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/blog.capdata.fr\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"fr-FR\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/blog.capdata.fr\/#organization\",\"name\":\"Capdata TECH BLOG\",\"url\":\"https:\/\/blog.capdata.fr\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\/\/blog.capdata.fr\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/01\/logo_capdata.webp\",\"contentUrl\":\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/01\/logo_capdata.webp\",\"width\":800,\"height\":254,\"caption\":\"Capdata TECH BLOG\"},\"image\":{\"@id\":\"https:\/\/blog.capdata.fr\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.linkedin.com\/company\/cap-data-consulting\/mycompany\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/blog.capdata.fr\/#\/schema\/person\/797b9b6698fa35f7ce3e9a70a8b102ae\",\"name\":\"Emmanuel RAMI\",\"sameAs\":[\"https:\/\/blog.capdata.fr\"],\"url\":\"https:\/\/blog.capdata.fr\/index.php\/author\/erami\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"PostgreSQL 13 : pr\u00e9sentation - Capdata TECH BLOG","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/blog.capdata.fr\/index.php\/postgresql-13-les-nouveautes-interessantes\/","og_locale":"fr_FR","og_type":"article","og_title":"PostgreSQL 13 : pr\u00e9sentation - Capdata TECH BLOG","og_description":"&nbsp; &nbsp; &nbsp; La nouvelle release officielle de PostgreSQL est sorti le 24 septembre 2020. Place \u00e0 la version 13. 
Celle ci est bien entendu, t\u00e9l\u00e9chargeable sur le site PostgreSQL.org sur lequel vous pourrez retrouver les sources, les derniers packages.&hellip; Continuer la lecture &rarr;","og_url":"https:\/\/blog.capdata.fr\/index.php\/postgresql-13-les-nouveautes-interessantes\/","og_site_name":"Capdata TECH BLOG","article_published_time":"2020-10-30T11:51:42+00:00","article_modified_time":"2020-10-30T11:51:35+00:00","og_image":[{"width":216,"height":234,"url":"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2020\/10\/images-1.jpg","type":"image\/jpeg"}],"author":"Emmanuel RAMI","twitter_card":"summary_large_image","twitter_misc":{"\u00c9crit par":"Emmanuel RAMI","Dur\u00e9e de lecture estim\u00e9e":"25 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/blog.capdata.fr\/index.php\/postgresql-13-les-nouveautes-interessantes\/#article","isPartOf":{"@id":"https:\/\/blog.capdata.fr\/index.php\/postgresql-13-les-nouveautes-interessantes\/"},"author":{"name":"Emmanuel RAMI","@id":"https:\/\/blog.capdata.fr\/#\/schema\/person\/797b9b6698fa35f7ce3e9a70a8b102ae"},"headline":"PostgreSQL 13 : pr\u00e9sentation","datePublished":"2020-10-30T11:51:42+00:00","dateModified":"2020-10-30T11:51:35+00:00","mainEntityOfPage":{"@id":"https:\/\/blog.capdata.fr\/index.php\/postgresql-13-les-nouveautes-interessantes\/"},"wordCount":1869,"commentCount":0,"publisher":{"@id":"https:\/\/blog.capdata.fr\/#organization"},"articleSection":["PostgreSQL"],"inLanguage":"fr-FR","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/blog.capdata.fr\/index.php\/postgresql-13-les-nouveautes-interessantes\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/blog.capdata.fr\/index.php\/postgresql-13-les-nouveautes-interessantes\/","url":"https:\/\/blog.capdata.fr\/index.php\/postgresql-13-les-nouveautes-interessantes\/","name":"PostgreSQL 13 : pr\u00e9sentation - Capdata TECH 
BLOG","isPartOf":{"@id":"https:\/\/blog.capdata.fr\/#website"},"datePublished":"2020-10-30T11:51:42+00:00","dateModified":"2020-10-30T11:51:35+00:00","breadcrumb":{"@id":"https:\/\/blog.capdata.fr\/index.php\/postgresql-13-les-nouveautes-interessantes\/#breadcrumb"},"inLanguage":"fr-FR","potentialAction":[{"@type":"ReadAction","target":["https:\/\/blog.capdata.fr\/index.php\/postgresql-13-les-nouveautes-interessantes\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/blog.capdata.fr\/index.php\/postgresql-13-les-nouveautes-interessantes\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Accueil","item":"https:\/\/blog.capdata.fr\/"},{"@type":"ListItem","position":2,"name":"PostgreSQL 13 : pr\u00e9sentation"}]},{"@type":"WebSite","@id":"https:\/\/blog.capdata.fr\/#website","url":"https:\/\/blog.capdata.fr\/","name":"Capdata TECH BLOG","description":"Le blog technique sur les bases de donn\u00e9es de CAP DATA Consulting","publisher":{"@id":"https:\/\/blog.capdata.fr\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/blog.capdata.fr\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"fr-FR"},{"@type":"Organization","@id":"https:\/\/blog.capdata.fr\/#organization","name":"Capdata TECH BLOG","url":"https:\/\/blog.capdata.fr\/","logo":{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/blog.capdata.fr\/#\/schema\/logo\/image\/","url":"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/01\/logo_capdata.webp","contentUrl":"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/01\/logo_capdata.webp","width":800,"height":254,"caption":"Capdata TECH BLOG"},"image":{"@id":"https:\/\/blog.capdata.fr\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.linkedin.com\/company\/cap-data-consulting\/mycompany\/"]},{"@type":"Person","@id":"https:\/\/blog.capdata.fr\/#\/schema\/person\/797b9b6698fa35f7ce3e9a70a8b102ae","name":"Emmanuel 
RAMI","sameAs":["https:\/\/blog.capdata.fr"],"url":"https:\/\/blog.capdata.fr\/index.php\/author\/erami\/"}]}},"_links":{"self":[{"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/posts\/8321","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/users\/32"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/comments?post=8321"}],"version-history":[{"count":28,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/posts\/8321\/revisions"}],"predecessor-version":[{"id":10589,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/posts\/8321\/revisions\/10589"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/media\/8349"}],"wp:attachment":[{"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/media?parent=8321"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/categories?post=8321"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/tags?post=8321"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}