{"id":8715,"date":"2021-06-18T23:20:57","date_gmt":"2021-06-18T22:20:57","guid":{"rendered":"https:\/\/blog.capdata.fr\/?p=8715"},"modified":"2021-06-18T23:20:57","modified_gmt":"2021-06-18T22:20:57","slug":"etude-paas-postgresql-13-sur-gcp","status":"publish","type":"post","link":"https:\/\/blog.capdata.fr\/index.php\/etude-paas-postgresql-13-sur-gcp\/","title":{"rendered":"Etude PaaS PostgreSQL 13 sur GCP"},"content":{"rendered":"<p>In the same spirit as what was done for MySQL <a href=\"https:\/\/blog.capdata.fr\/index.php\/comparatif-mysql-dans-le-paas-episode-1-google-cloud-sql\/\">in this episode<\/a>, we will survey what the <em>Cloud SQL PostgreSQL<\/em> offering of the Mountain View cloud provides. <\/p>\n<p>For the general picture (service building blocks, territorial organization of regions, tiers and storage, etc.), refer to the article cited above. 
These notions apply to all three Cloud SQL PaaS offerings, namely MySQL, PostgreSQL and the latest addition, SQL Server (whose GA was released <a href=\"https:\/\/cloud.google.com\/sql\/docs\/release-notes\">in February of last year<\/a>), so we will not go over them again. In this article, we will focus on the notable differences between using an <em>on-prem<\/em> PostgreSQL instance and a Cloud SQL one. <\/p>\n<h1>Versions, limitations and supported extensions:<\/h1>\n<p>Version-wise, Cloud SQL has supported PostgreSQL 13 since November 2020, making it the first major cloud provider to adopt the latest PostgreSQL release: Amazon only announced availability <a href=\"https:\/\/aws.amazon.com\/about-aws\/whats-new\/2021\/02\/amazon-rds-now-supports-postgresql-13\/\">in February of this year<\/a>, and Azure still does <a href=\"https:\/\/docs.microsoft.com\/en-us\/azure\/postgresql\/concepts-supported-versions\">not go beyond version 11 on Single Server<\/a> to this day (although 12 and 13 are available on Flexible and Hyperscale). <\/p>\n<p><a href=\"https:\/\/cloud.google.com\/sql\/docs\/postgres\/extensions\">47 extensions<\/a> are supported, among them the main ones: GIN, GiST, hstore, intarray, pgaudit, pgcrypto, pgstattuple, the indispensable pg_stat_statements and, of course, plpgsql. Note that CREATE EXTENSION is the only command normally requiring SUPERUSER privileges that is allowed. 
<\/p>\n<p>For the exhaustive list:<\/p>\n<pre class=\"brush: sql; title: ; notranslate\" title=\"\">\r\npostgres=&gt; select setting from pg_settings where name='cloudsql.supported_extensions' ;\r\n-[ RECORD 1 ]---------------------------------------------------------------------------------------\r\nsetting | address_standardizer:3.0.2, address_standardizer_data_us:3.0.2, bloom:1.0, btree_gin:1.3, \r\nbtree_gist:1.5, citext:1.6, cube:1.4, dblink:1.2, dict_int:1.0, dict_xsyn:1.0, earthdistance:1.1,\r\n fuzzystrmatch:1.1, hll:2.14, hstore:1.7, intagg:1.1, intarray:1.3, ip4r:2.4, isn:1.2, lo:1.1, \r\nltree:1.2, pageinspect:1.8, pg_buffercache:1.3, pg_freespacemap:1.2, pg_partman:4.4.0, \r\npg_prewarm:1.2, pg_repack:1.4.6, pg_similarity:1.0, pg_stat_statements:1.8, pg_trgm:1.5,\r\n pg_visibility:1.2, pgaudit:1.5, pgcrypto:1.3, pgfincore:1.2, pglogical:2.3.3, pgrowlocks:1.2, \r\npgstattuple:1.5, pgtap:1.1.0, plpgsql:1.0, plproxy:2.10.0, postgis:3.0.2, postgis_raster:3.0.2,\r\n postgis_sfcgal:3.0.2, postgis_tiger_geocoder:3.0.2, postgis_topology:3.0.2, postgres_fdw:1.0,\r\n prefix:1.2.0, sslinfo:1.2, tablefunc:1.0, tsm_system_rows:1.0, tsm_system_time:1.0, \r\nunaccent:1.1, uuid-ossp:1.1\r\n<\/pre>\n<p>To enable an extension, for example <a href=\"https:\/\/www.postgresql.org\/docs\/current\/pgstatstatements.html\">pg_stat_statements<\/a>:<\/p>\n<pre class=\"brush: sql; title: ; notranslate\" title=\"\">\r\npostgres=&gt; CREATE EXTENSION pg_stat_statements ;\r\nCREATE EXTENSION\r\n\r\npostgres=&gt; select * from pg_extension ;\r\n-[ RECORD 1 ]--+-------------------\r\noid            | 14026\r\nextname        | plpgsql\r\nextowner       | 10\r\nextnamespace   | 11\r\nextrelocatable | f\r\nextversion     | 1.0\r\nextconfig      |\r\nextcondition   |\r\n-[ RECORD 2 ]--+-------------------\r\noid            | 16444\r\nextname        | pg_stat_statements\r\nextowner       | 16389\r\nextnamespace   | 2200\r\nextrelocatable | t\r\nextversion     | 
1.8\r\nextconfig      |\r\nextcondition   |\r\n\r\n<\/pre>\n<p>And to configure the maximum number of statements retained in the view:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\nC:\\Program Files (x86)\\Google\\Cloud SDK&gt;gcloud sql instances patch cloudpg13 \\\r\n  --database-flags pg_stat_statements.max=10000\r\nThe following message will be used for the patch API method.\r\n{&quot;name&quot;: &quot;cloudpg13&quot;, &quot;project&quot;: &quot;paas-postgresql-mysql&quot;, &quot;settings&quot;: {&quot;databaseFlags&quot;: [{&quot;name&quot;: &quot;pg_stat_statements.max&quot;, &quot;value&quot;: &quot;10000&quot;}]}}\r\nWARNING: This patch modifies database flag values, which may require\r\nyour instance to be restarted. Check the list of supported flags -\r\nhttps:\/\/cloud.google.com\/sql\/docs\/postgres\/flags - to see if your\r\ninstance will be restarted when this patch is submitted.\r\n\r\nDo you want to continue (Y\/n)?  Y\r\n\r\nPatching Cloud SQL instance...done.\r\nUpdated [https:\/\/sqladmin.googleapis.com\/sql\/v1beta4\/projects\/paas-postgresql-mysql\/instances\/cloudpg13].\r\n<\/pre>\n<pre class=\"brush: sql; title: ; notranslate\" title=\"\">\r\npostgres=&gt; select query, total_plan_time, total_plan_time, total_exec_time, calls, shared_blks_hit\r\npostgres-&gt; from  pg_stat_statements order by  total_exec_time limit 10 ;\r\n-[ RECORD 1 ]---+------------------------------------------------------------------------------------\r\nquery           | show max_connections\r\ntotal_plan_time | 0\r\ntotal_plan_time | 0\r\ntotal_exec_time | 0.008291\r\ncalls           | 1\r\nshared_blks_hit | 0\r\n-[ RECORD 2 ]---+------------------------------------------------------------------------------------\r\nquery           | show max_wal_size\r\ntotal_plan_time | 0\r\ntotal_plan_time | 0\r\ntotal_exec_time | 0.008558\r\ncalls           | 1\r\nshared_blks_hit | 0\r\n(...)\r\n<\/pre>\n<p>As for limitations, notable ones include:<br 
\/>\n&#8211; LLVM\/JIT and logical replication are unavailable for versions 12 and 13 for now.<br \/>\n&#8211; Any command requiring SUPERUSER privileges: ALTER SYSTEM, COPY (other than stdin), operations on backends, pg_switch_wal(), pg_reload_conf(), external operations on files, etc. The exhaustive list can be found <a href=\"https:\/\/www.postgresql.org\/docs\/13\/functions-admin.html\">in the PostgreSQL documentation<\/a>. <\/p>\n<h1>Configuration surface:<\/h1>\n<p>PostgreSQL configuration parameters are set through <a href=\"https:\/\/cloud.google.com\/sql\/docs\/postgres\/flags\">Database Flags<\/a>. Since ALTER SYSTEM is disabled, you have to go through either the console, the gcloud command line, or the REST API. There are 134 Database Flags for the POSTGRES_13 version:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ gcloud sql flags list \\\r\n  --database-version=POSTGRES_13 \\\r\n  --sort-by=NAME\r\nNAME                                           TYPE             DATABASE_VERSION                                              ALLOWED_VALUES\r\nautovacuum                                     BOOLEAN          POSTGRES_9_6,POSTGRES_10,POSTGRES_11,POSTGRES_12,POSTGRES_13\r\nautovacuum_analyze_scale_factor                FLOAT            POSTGRES_9_6,POSTGRES_10,POSTGRES_11,POSTGRES_12,POSTGRES_13\r\nautovacuum_analyze_threshold                   INTEGER          POSTGRES_9_6,POSTGRES_10,POSTGRES_11,POSTGRES_12,POSTGRES_13\r\nautovacuum_freeze_max_age                      INTEGER          POSTGRES_9_6,POSTGRES_10,POSTGRES_11,POSTGRES_12,POSTGRES_13\r\n(...)\r\nvacuum_freeze_table_age                        INTEGER          POSTGRES_9_6,POSTGRES_10,POSTGRES_11,POSTGRES_12,POSTGRES_13\r\nvacuum_multixact_freeze_min_age                INTEGER          
POSTGRES_9_6,POSTGRES_10,POSTGRES_11,POSTGRES_12,POSTGRES_13\r\nvacuum_multixact_freeze_table_age              INTEGER          POSTGRES_9_6,POSTGRES_10,POSTGRES_11,POSTGRES_12,POSTGRES_13\r\nwork_mem                                       INTEGER          POSTGRES_9_6,POSTGRES_10,POSTGRES_11,POSTGRES_12,POSTGRES_13\r\n<\/pre>\n<p>Beware of conflating the two: a select on pg_settings will return far more parameters than there are configurable flags; the others are simply not modifiable on GCP. For example, while it is possible to modify <em>work_mem<\/em>:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ gcloud sql instances patch cloudpg13 \\\r\n  --database-flags work_mem=128\r\nThe following message will be used for the patch API method.\r\n{&quot;name&quot;: &quot;cloudpg13&quot;, &quot;project&quot;: &quot;paas-postgresql-mysql&quot;, &quot;settings&quot;: {&quot;databaseFlags&quot;: [{&quot;name&quot;: &quot;work_mem&quot;, &quot;value&quot;: &quot;128&quot;}]}}\r\nWARNING: This patch modifies database flag values, which may require\r\nyour instance to be restarted. Check the list of supported flags -\r\nhttps:\/\/cloud.google.com\/sql\/docs\/postgres\/flags - to see if your\r\ninstance will be restarted when this patch is submitted.\r\n\r\nDo you want to continue (Y\/n)?  
Y\r\n\r\nPatching Cloud SQL instance...done.\r\nUpdated [https:\/\/sqladmin.googleapis.com\/sql\/v1beta4\/projects\/paas-postgresql-mysql\/instances\/cloudpg13].\r\n\r\n$ gcloud sql instances describe cloudpg13\r\n(...)\r\n  databaseFlags:\r\n  - name: work_mem\r\n    value: '128'\r\n<\/pre>\n<p>On the other hand, we cannot touch other parameters such as <em>shared_buffers<\/em>&#8230;<\/p>\n<pre class=\"brush: sql; title: ; notranslate\" title=\"\">postgres=&gt; show shared_buffers;\r\n shared_buffers\r\n----------------\r\n 1229MB\r\n(1 row)\r\n\r\npostgres=&gt; alter system set shared_buffers to '1400MB' ;\r\nERROR:  must be superuser to execute ALTER SYSTEM command\r\n\r\n<\/pre>\n<p>&#8230;or <em>max_connections<\/em>, which are directly tied to the selected tier:<br \/>\n<img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/maxconnections.png\" alt=\"\" width=\"819\" height=\"436\" class=\"aligncenter size-full wp-image-8737\" srcset=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/maxconnections.png 819w, https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/maxconnections-300x160.png 300w, https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/maxconnections-768x409.png 768w\" sizes=\"auto, (max-width: 819px) 100vw, 819px\" \/><\/p>\n<h1>Connectivity:<\/h1>\n<p>Google recommends using <a href=\"https:\/\/cloud.google.com\/sql\/docs\/postgres\/sql-proxy\">Cloud SQL Proxy<\/a> to connect securely to Cloud SQL PaaS instances. 
It is a middleware running on the application side, which connects to another middleware layer on the PaaS side:<br \/>\n<img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/cloudsqlproxy-1024x416.png\" alt=\"\" width=\"640\" height=\"260\" class=\"aligncenter size-large wp-image-8735\" srcset=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/cloudsqlproxy-1024x416.png 1024w, https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/cloudsqlproxy-300x122.png 300w, https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/cloudsqlproxy-768x312.png 768w, https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/cloudsqlproxy.png 1111w\" sizes=\"auto, (max-width: 640px) 100vw, 640px\" \/><br \/>\nYou will need to download <a href=\"https:\/\/cloud.google.com\/sql\/docs\/postgres\/sql-proxy\">the client program<\/a> matching your client platform (Linux, Windows, etc.). <\/p>\n<p>Cloud SQL Proxy connects to the PaaS with a service account declared in the IAM identity-management part of GCP, and this account must hold at least the <em>cloudsql.instances.connect<\/em> privilege, which can be obtained through the Cloud SQL Admin, Editor or Client roles depending on what the application requires. 
A JSON key (<em>-credential_file<\/em>) is generated and must be deposited on the client machine from which the Cloud SQL Proxy program will run:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">$ cloud_sql_proxy_x64 \\\r\n  -credential_file=paas-postgresql-mysql-0a7b74d61a31.json \\\r\n  -instances=&quot;paas-postgresql-mysql:europe-west1:cloudpg13=tcp:5432&quot;\r\n2021\/06\/05 15:39:12 using credential file for authentication; email=xxxxxxxxxxxx@paas-postgresql-mysql.iam.gserviceaccount.com\r\n2021\/06\/05 15:39:13 Listening on 127.0.0.1:5432 for paas-postgresql-mysql:europe-west1:cloudpg13\r\n2021\/06\/05 15:39:13 Ready for new connections\r\n2021\/06\/05 15:39:24 New connection for &quot;paas-postgresql-mysql:europe-west1:cloudpg13&quot;\r\n<\/pre>\n<p>Once the proxy is running (it is advisable to wrap it in a <em>systemd<\/em>-style service, or in a container), you can then connect with a classic psql or any other client program, pointed at the proxy listening on localhost:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">$ psql &quot;host=127.0.0.1 sslmode=disable dbname=postgres user=postgres&quot;\r\nPassword for user postgres:\r\npsql (13.3, server 13.2)\r\npostgres=&gt; select version() ;\r\n                                         version\r\n------------------------------------------------------------------------------------------\r\n PostgreSQL 13.2 on x86_64-pc-linux-gnu, compiled by Debian clang version 10.0.1 , 64-bit\r\n(1 row)\r\n<\/pre>\n<p>This may seem heavy, but it makes it possible to centralize access to all the Cloud SQL PaaS instances of a Project through a single Cloud SQL Proxy service. This mode obviously supports SSL certificates, which is advisable when client access is initiated from outside and without a VPN. 
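<\/p>\n<p>The <em>systemd<\/em> wrapping suggested above can be sketched as a minimal unit file; the binary path, key location and connection string below are assumptions reusing the values from the earlier examples:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n# \/etc\/systemd\/system\/cloud-sql-proxy.service (hypothetical paths)\r\n[Unit]\r\nDescription=Cloud SQL Proxy for project paas-postgresql-mysql\r\nAfter=network-online.target\r\n\r\n[Service]\r\nExecStart=\/usr\/local\/bin\/cloud_sql_proxy \\\r\n  -credential_file=\/etc\/cloudsql\/paas-postgresql-mysql-0a7b74d61a31.json \\\r\n  -instances=paas-postgresql-mysql:europe-west1:cloudpg13=tcp:5432\r\n# Restart the proxy automatically if it dies\r\nRestart=always\r\n\r\n[Install]\r\nWantedBy=multi-user.target\r\n<\/pre>\n<p>Enable it with <code>systemctl enable --now cloud-sql-proxy<\/code>; psql then targets 127.0.0.1:5432 exactly as shown above.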
<\/p>\n<h1>Backup and restore:<\/h1>\n<p>Two types of backups are available: <strong>automated<\/strong> and <strong>on-demand<\/strong> (or <em>manual<\/em>). <\/p>\n<h2>Automated backups<\/h2>\n<p>Automated backups are enabled at the PaaS level, either at creation time or later through an instance setting change. They run during 4-hour maintenance windows and are retained either for a period of time or by number of backups (7 by default, 365 max):<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/autobackup-1.png\" alt=\"\" width=\"745\" height=\"318\" class=\"aligncenter size-full wp-image-8741\" srcset=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/autobackup-1.png 745w, https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/autobackup-1-300x128.png 300w\" sizes=\"auto, (max-width: 745px) 100vw, 745px\" \/><\/p>\n<p>Note that their location is also configurable, should you want to favor one region over another because of legal constraints. Enabling Point-In-Time recovery additionally turns on WAL archiving, with a retention expressed in days (7 by default). WAL files older than the last full backup are automatically deleted. 
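<\/p>\n<p>As a sketch, these retention settings can also be driven from the command line; the flag names below are the ones documented for gcloud sql, and the values are purely illustrative:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n# Keep 14 automated backups and 7 days of WAL for Point-In-Time recovery\r\n$ gcloud sql instances patch cloudpg13 \\\r\n  --retained-backups-count=14 \\\r\n  --retained-transaction-log-days=7\r\n<\/pre>\n<p>The first flag controls how many automated backups are kept, the second the WAL retention used for Point-In-Time recovery.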
<\/p>\n<h2>On-demand backups<\/h2>\n<p>It is also possible to take on-demand backups with gcloud (or the console, or the REST API):<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ gcloud sql backups create --instance=cloudpg13 --location=europe-west1 --description=MANUAL\r\nBacking up Cloud SQL instance...done.\r\n[https:\/\/sqladmin.googleapis.com\/sql\/v1beta4\/projects\/paas-postgresql-mysql\/instances\/cloudpg13] backed up.\r\n\r\n$ gcloud sql backups list --instance=cloudpg13\r\nID             WINDOW_START_TIME              ERROR  STATUS      INSTANCE\r\n1623055883991  2021-06-07T08:51:23.991+00:00  -      SUCCESSFUL  cloudpg13\r\n1623055751008  2021-06-07T08:49:11.008+00:00  -      SUCCESSFUL  cloudpg13\r\n1622640837128  2021-06-02T13:33:57.128+00:00  -      SUCCESSFUL  cloudpg13\r\n\r\n$ gcloud sql backups list --instance=cloudpg13 --filter=&quot;type=AUTOMATED&quot;\r\nID             WINDOW_START_TIME              ERROR  STATUS      INSTANCE\r\n1622640837128  2021-06-02T13:33:57.128+00:00  -      SUCCESSFUL  cloudpg13\r\n\r\n$ gcloud sql backups list --instance=cloudpg13 --filter=&quot;type=ON_DEMAND&quot;\r\nID             WINDOW_START_TIME              ERROR  STATUS      INSTANCE\r\n1623055883991  2021-06-07T08:51:23.991+00:00  -      SUCCESSFUL  cloudpg13\r\n1623055751008  2021-06-07T08:49:11.008+00:00  -      SUCCESSFUL  cloudpg13\r\n<\/pre>\n<p>Otherwise, a DESCRIBE on each BACKUP_ID makes the distinction:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ gcloud sql backups describe 1623055883991 --instance=cloudpg13\r\nbackupKind: SNAPSHOT\r\ndescription: MANUAL\r\nendTime: '2021-06-07T08:52:04.569Z'\r\nenqueuedTime: '2021-06-07T08:51:23.991Z'\r\nid: '1623055883991'\r\ninstance: cloudpg13\r\nkind: sql#backupRun\r\nlocation: europe-west1\r\nselfLink: 
https:\/\/sqladmin.googleapis.com\/sql\/v1beta4\/projects\/paas-postgresql-mysql\/instances\/cloudpg13\/backupRuns\/1623055883991\r\nstartTime: '2021-06-07T08:51:23.992Z'\r\nstatus: SUCCESSFUL\r\ntype: ON_DEMAND\r\nwindowStartTime: '2021-06-07T08:51:23.991Z'\r\n\r\n$ gcloud sql backups describe 1622640837128 --instance=cloudpg13\r\nbackupKind: SNAPSHOT\r\nendTime: '2021-06-02T13:35:18.283Z'\r\nenqueuedTime: '2021-06-02T13:33:57.128Z'\r\nid: '1622640837128'\r\ninstance: cloudpg13\r\nkind: sql#backupRun\r\nlocation: eu\r\nselfLink: https:\/\/sqladmin.googleapis.com\/sql\/v1beta4\/projects\/paas-postgresql-mysql\/instances\/cloudpg13\/backupRuns\/1622640837128\r\nstartTime: '2021-06-02T13:33:57.136Z'\r\nstatus: SUCCESSFUL\r\ntype: AUTOMATED\r\nwindowStartTime: '2021-06-02T13:33:57.128Z'\r\n<\/pre>\n<h2>Data exports<\/h2>\n<p>Whether automated or manual, however, <strong>backups are not exportable<\/strong>: you cannot retrieve the files produced by a backup. 
And since deleting a PaaS instance deletes all its backups, you will need a way to export the data to give yourself a fallback in case of trouble, either with a classic pg_dump client:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ pg_dump --host=127.0.0.1 --username=postgres --dbname=dvdrental &gt; backup.dmp\r\nPassword:\r\n<\/pre>\n<p>&#8230;or with gcloud into a GS bucket (which, unlike pg_dump, will export neither functions, nor stored procedures, nor triggers); the PaaS service account must hold the <em>storage.legacyBucketWriter<\/em> and <em>storage.legacyObjectReader<\/em> privileges on the bucket in question:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n# Create the bucket\r\n$ gsutil mb -c NEARLINE -l europe-west1 -b on gs:\/\/cloudpg13backups\r\nCreating gs:\/\/cloudpg13backups\/...\r\n\r\n$ gsutil ls\r\ngs:\/\/cloudpg13backups\/\r\n\r\n# Grant the privileges to the service account\r\n$ gsutil iam ch serviceAccount:xxxxxxxxxxx-xxxxxxxx@gcp-sa-cloud-sql.iam.gserviceaccount.com:legacyBucketWriter,legacyObjectReader gs:\/\/cloudpg13backups\r\n\r\n# Export the data into the bucket\r\n$ gcloud sql export sql cloudpg13 gs:\/\/cloudpg13backups\/cloudpg13.dvdrental.dmp.gz --database=dvdrental\r\nExporting Cloud SQL instance...done.\r\nExported [https:\/\/sqladmin.googleapis.com\/sql\/v1beta4\/projects\/paas-postgresql-mysql\/instances\/cloudpg13] to [gs:\/\/cloudpg13backups\/cloudpg13.dvdrental.dmp.gz].\r\n\r\n$ gsutil ls gs:\/\/cloudpg13backups\r\ngs:\/\/cloudpg13backups\/cloudpg13.dvdrental.dmp.gz\r\n\r\n# ... and fetch it locally\r\n$ gsutil cp gs:\/\/cloudpg13backups\/cloudpg13.dvdrental.dmp.gz cloudpg13.dvdrental.dmp.gz\r\nCopying gs:\/\/cloudpg13backups\/cloudpg13.dvdrental.dmp.gz...\r\n- [1 files][  8.3 KiB\/  8.3 KiB]\r\nOperation completed over 1 objects\/8.3 
KiB.\r\n<\/pre>\n<h2>Restore:<\/h2>\n<p>You can restore onto the current PaaS instance or create another one from a backup. To restore a full backup:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ gcloud sql backups describe 1623083406184 --instance=cloudpg13\r\nbackupKind: SNAPSHOT\r\ndescription: 2021-06-07 18:31\r\nendTime: '2021-06-07T16:30:46.801Z'\r\nenqueuedTime: '2021-06-07T16:30:06.184Z'\r\nid: '1623083406184'\r\ninstance: cloudpg13\r\nkind: sql#backupRun\r\nlocation: europe-west1\r\nselfLink: https:\/\/sqladmin.googleapis.com\/sql\/v1beta4\/projects\/paas-postgresql-mysql\/instances\/cloudpg13\/backupRuns\/1623083406184\r\nstartTime: '2021-06-07T16:30:06.185Z'\r\nstatus: SUCCESSFUL\r\ntype: ON_DEMAND\r\nwindowStartTime: '2021-06-07T16:30:06.184Z'\r\n\r\n$ gcloud sql backups restore 1623083406184 --restore-instance=cloudpg13 --backup-instance=cloudpg13\r\nAll current data on the instance will be lost when the backup is\r\nrestored.\r\n\r\nDo you want to continue (Y\/n)?  Y\r\n\r\nRestoring Cloud SQL instance...done.\r\nRestored [https:\/\/sqladmin.googleapis.com\/sql\/v1beta4\/projects\/paas-postgresql-mysql\/instances\/cloudpg13].\r\n<\/pre>\n<p>On the other hand, if you want a Point-in-Time restore, you have to restore onto a new instance, created by cloning:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ gcloud sql instances clone  cloudpg13 cloudpg13-202106071200 --point-in-time 2021-06-07T17:00:00.000Z\r\nCloning Cloud SQL instance...\\\r\nCreated [https:\/\/sqladmin.googleapis.com\/sql\/v1beta4\/projects\/paas-postgresql-mysql\/instances\/cloudpg13-202106071200].\r\nNAME                    DATABASE_VERSION  LOCATION        TIER              PRIMARY_ADDRESS  PRIVATE_ADDRESS  STATUS\r\ncloudpg13-202106071200  POSTGRES_13       europe-west1-b  db-custom-1-3840  35.187.59.124    -                RUNNABLE\r\n<\/pre>\n<p>The old instance remains accessible. 
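<\/p>\n<p>A small helper sketch for the naming used above; the timestamp-suffixed clone name is only a convention of this article, not a Cloud SQL requirement, and the name is derived here from the UTC target time with GNU date:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n# Derive a clone name such as cloudpg13-202106071700 from the target point in time\r\nPIT=&quot;2021-06-07T17:00:00.000Z&quot;\r\nCLONE=&quot;cloudpg13-$(date -u -d &quot;$PIT&quot; +%Y%m%d%H%M)&quot;\r\ngcloud sql instances clone cloudpg13 &quot;$CLONE&quot; --point-in-time &quot;$PIT&quot;\r\n<\/pre>\n<p>Note that <code>date -d<\/code> is the GNU coreutils form; BSD\/macOS date uses <code>-j -f<\/code> instead.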
<\/p>\n<p>As for exports, either replay them with pg_restore or psql depending on the output format (<em>custom<\/em> or <em>plain text<\/em> respectively), or re-import the dump with gcloud:<\/p>\n<p>With psql: <\/p>\n<pre class=\"brush: sql; title: ; notranslate\" title=\"\">\r\npostgres=&gt; create database dvdrental ;\r\nCREATE DATABASE\r\npostgres=&gt; \\c dvdrental\r\npsql (13.3, server 13.2)\r\nYou are now connected to database &quot;dvdrental&quot; as user &quot;postgres&quot;.\r\ndvdrental=&gt; \\i backup.dmp\r\nCREATE TABLE\r\nALTER TABLE\r\nCOPY 200\r\nCOPY 603\r\nCOPY 16\r\nCOPY 600\r\nCOPY 109\r\nCOPY 599\r\nCOPY 1000\r\n(...)\r\n<\/pre>\n<p>With gcloud:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ gcloud sql import sql cloudpg13 gs:\/\/cloudpg13backups\/cloudpg13.dvdrental.dmp.gz \\\r\n  --database=dvdrental --user=postgres\r\nData from [gs:\/\/cloudpg13backups\/cloudpg13.dvdrental.dmp.gz] will be\r\nimported to [cloudpg13].\r\n\r\nDo you want to continue (Y\/n)?  Y\r\n\r\nImporting data into Cloud SQL instance...done.\r\nImported data from [gs:\/\/cloudpg13backups\/cloudpg13.dvdrental.dmp.gz] into [https:\/\/sqladmin.googleapis.com\/sql\/v1beta4\/projects\/paas-postgresql-mysql\/instances\/cloudpg13].\r\n<\/pre>\n<p><strong>Note<\/strong>: For custom imports and exports of data to flat or CSV files, keep in mind that COPY to or from a server-side file requires the SUPERUSER privilege, so it cannot be used that way in a script. It will only work if the data is supplied on standard input (<em>stdin<\/em>). 
To work around this at the psql prompt, you can use the <strong>\\copy<\/strong> meta-command:<\/p>\n<pre class=\"brush: sql; title: ; notranslate\" title=\"\">\r\ndvdrental=&gt; COPY public.actor (actor_id, first_name, last_name, last_update) FROM '3057.dat';\r\nERROR:  must be superuser or a member of the pg_read_server_files role to COPY from a file\r\nHINT:  Anyone can COPY to stdout or from stdin. psql's \\copy command also works for anyone.\r\n\r\ndvdrental=&gt; \\copy public.actor (actor_id, first_name, last_name, last_update) FROM '3057.dat';\r\nCOPY 200\r\n<\/pre>\n<h1>Updates and maintenance:<\/h1>\n<p>As with the equivalent managed services at competitors, there is a somewhat <em>imposed<\/em> maintenance window during which updates are applied. During this period, access to the PaaS is cut off; the documentation mentions an outage of <a href=\"https:\/\/cloud.google.com\/sql\/docs\/postgres\/maintenance\">60 seconds on average<\/a>, but the real difficulty is predicting this outage, all the more so since replica instances cannot take over during that window: they are cut off too. <\/p>\n<p>As on RDS, maintenance windows can be scheduled at expected times so they do not occur at random. 
<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ gcloud sql instances patch cloudpg13 \\\r\n        --maintenance-window-day=SUN \\\r\n        --maintenance-window-hour=04\r\n<\/pre>\n<p>It is also possible to schedule a period of up to 90 days during which maintenance cannot take place, for instance to cover a sales period that matters a lot for your revenue (private sales, clearance sales, holiday season, etc.). The deny period can recur every year if you specify only a date in <em>mm-dd<\/em> format, without the year:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ gcloud sql instances patch cloudpg13 \\\r\n --deny-maintenance-period-start-date 06-01 \\\r\n --deny-maintenance-period-end-date 06-30 \\\r\n --deny-maintenance-period-time 00:00:00\r\n<\/pre>\n<p>Finally, you can subscribe to email notifications to be warned of an upcoming outage, in the user preferences:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/gloudpref.png\" alt=\"\" width=\"976\" height=\"268\" class=\"aligncenter size-full wp-image-8760\" srcset=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/gloudpref.png 976w, https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/gloudpref-300x82.png 300w, https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/gloudpref-768x211.png 768w\" sizes=\"auto, (max-width: 976px) 100vw, 976px\" \/><\/p>\n<h1>Supervision:<\/h1>\n<ul>\n<li><strong>Error logs:<\/strong><\/li>\n<\/ul>\n<p>Access to the PostgreSQL logs is available either through the console or through gcloud logging:<\/p>\n<pre class=\"brush: bash; title: ; 
notranslate\" title=\"\">\r\n$ gcloud logging read resource.type=&quot;cloudsql_database&quot; --project=paas-postgresql-mysql  --limit=1 --format=json\r\n[\r\n  {\r\n    &quot;insertId&quot;: &quot;s=4a2d092448f2442196dbf937d53aa2f3;i=11b17e;b=8d370594a8c64ab6bf7936578969b58d;m=2f32331f4e;t=5c50feb23e9aa;x=56218d300d4ea42d-0-0@a1&quot;,\r\n    &quot;logName&quot;: &quot;projects\/paas-postgresql-mysql\/logs\/cloudsql.googleapis.com%2Fpostgres.log&quot;,\r\n    &quot;receiveTimestamp&quot;: &quot;2021-06-18T20:10:45.252620488Z&quot;,\r\n    &quot;resource&quot;: {\r\n      &quot;labels&quot;: {\r\n        &quot;database_id&quot;: &quot;paas-postgresql-mysql:cloudpg13&quot;,\r\n        &quot;project_id&quot;: &quot;paas-postgresql-mysql&quot;,\r\n        &quot;region&quot;: &quot;europe&quot;\r\n      },\r\n      &quot;type&quot;: &quot;cloudsql_database&quot;\r\n    },\r\n    &quot;severity&quot;: &quot;INFO&quot;,\r\n    &quot;textPayload&quot;: &quot;2021-06-18 20:10:43.733 UTC [52719]: [2-1] db=,user= LOG:  \r\n       automatic analyze of table \\&quot;cloudsqladmin.public.heartbeat\\&quot; system usage: \r\n       CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s&quot;,\r\n    &quot;timestamp&quot;: &quot;2021-06-18T20:10:43.733930Z&quot;\r\n  }\r\n]\r\n<\/pre>\n<p>Avec un outil comme <a href=\"https:\/\/stedolan.github.io\/jq\/\">jq<\/a>, il devient possible de r\u00e9cup\u00e9rer pr\u00e9cis\u00e9ment des informations contenues dans le blob JSON. 
Another option is to export the logs to another storage backend (Cloud Storage, BigTable or Elastic Cloud, for instance) through <a href=\"https:\/\/cloud.google.com\/logging\/docs\/export\">sinks<\/a>.<\/p>\n<ul>\n<li><strong>Basic performance counters:<\/strong><\/li>\n<\/ul>\n<p>The console gives access to 8 basic counters:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/gcloudperf.png\" alt=\"\" width=\"424\" height=\"399\" class=\"alignnone size-full wp-image-8768\" srcset=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/gcloudperf.png 424w, https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/gcloudperf-300x282.png 300w\" sizes=\"auto, (max-width: 424px) 100vw, 424px\" \/><\/p>\n<ul>\n<li><strong>Cloud Monitoring:<\/strong><\/li>\n<\/ul>\n<p>The Cloud Monitoring component goes a bit further and lets you build richer dashboards with more metrics:<br \/>\n<img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/cloudperf2.png\" alt=\"\" width=\"807\" height=\"412\" class=\"alignnone size-full wp-image-8769\" srcset=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/cloudperf2.png 807w, https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/cloudperf2-300x153.png 300w, https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/cloudperf2-768x392.png 768w\" sizes=\"auto, (max-width: 807px) 100vw, 807px\" \/><br \/>\nIt also lets you create alerts with notifications.<\/p>\n<ul>\n<li><strong>SQL tuning:<\/strong><\/li>\n<\/ul>\n<p><a href=\"https:\/\/cloud.google.com\/sql\/docs\/postgres\/using-insights\">Query Insights <\/a>gives a first view of the queries being executed, provided the instance has Insights enabled:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ gcloud sql instances 
create ... --insights-config-query-insights-enabled\r\n<\/pre>\n<p>It is free of charge when accessed through the console, with one week of retention. It shows the queries with their execution time, number of calls, and average number of rows returned. Two things worth noting, though:<br \/>\n&#8211; First, it can normalize queries and generate a query id (<em>query_hash <\/em>in gcloud terms), just like pg_stat_statements does:<\/p>\n<pre class=\"brush: sql; title: ; notranslate\" title=\"\">\r\ndvdrental=&gt; select count(1) from inventory where film_id = 995 ;\r\n count\r\n-------\r\n     6\r\n(1 row)\r\n\r\ndvdrental=&gt; select count(1) from inventory where film_id = 1000 ;\r\n count\r\n-------\r\n     8\r\n(1 row)\r\n<\/pre>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/cloudperf7.png\" alt=\"\" width=\"441\" height=\"249\" class=\"alignnone size-full wp-image-8774\" srcset=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/cloudperf7.png 441w, https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/cloudperf7-300x169.png 300w\" sizes=\"auto, (max-width: 441px) 100vw, 441px\" \/><br \/>\n&#8211; Second, it can display a sample execution plan with operator details: <\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/cloudperf4.png\" alt=\"\" width=\"365\" height=\"338\" class=\"alignnone size-full wp-image-8771\" srcset=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/cloudperf4.png 365w, https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/cloudperf4-300x278.png 300w\" sizes=\"auto, (max-width: 365px) 100vw, 365px\" \/><br \/>\n<img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/cloudperf5.png\" alt=\"\" width=\"862\" height=\"810\" class=\"alignnone 
size-full wp-image-8772\" srcset=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/cloudperf5.png 862w, https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/cloudperf5-300x282.png 300w, https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/cloudperf5-768x722.png 768w\" sizes=\"auto, (max-width: 862px) 100vw, 862px\" \/><\/p>\n<p>The other good news is that <a href=\"https:\/\/www.postgresql.org\/docs\/current\/pgstatstatements.html\">pg_stat_statements <\/a>is also available, and the SUPERUSER-only actions are allowed (changing the PGSS GUC parameters or calling pg_stat_statements_reset()). So a priori you can build a monitoring tool, or port an existing one, to GCP PaaS instances without much trouble. <\/p>\n<h2>Routine maintenance beyond backups: vacuums, partition management, etc.<\/h2>\n<p>To schedule non-backup tasks such as vacuum full, you can use the <a href=\"https:\/\/cloud.google.com\/scheduler\/docs\">Cloud Scheduler<\/a> component, which can run scheduled jobs through serverless Cloud Functions. <\/p>\n<h2>High availability:<\/h2>\n<p>We will not go into the details here (this could be the subject of a dedicated post), but broadly speaking, when deploying a PostgreSQL PaaS instance it is possible to enable high availability at the regional level. A standby instance is created in another zone of the same region, and synchronization is handled at the disk block level. 
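<\/p>\n<p>As a sketch only, regional high availability is driven by the availability type, either at creation time or afterwards on an existing instance:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ gcloud sql instances create ... --availability-type=REGIONAL\r\n$ gcloud sql instances patch cloudpg13 --availability-type=REGIONAL\r\n<\/pre>\n<p>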
<\/p>\n<p>Since Cloud SQL is a regional resource, it is not possible to create a standby in another region. To protect against the outage of an entire region, the solution is to create a read replica in another region, which is however replicated asynchronously:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/cloud-sql-mysql-disaster-recovery-complete-failover-fallback-basic-architecture.png\" alt=\"\" width=\"830\" height=\"436\" class=\"alignnone size-full wp-image-8773\" srcset=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/cloud-sql-mysql-disaster-recovery-complete-failover-fallback-basic-architecture.png 830w, https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/cloud-sql-mysql-disaster-recovery-complete-failover-fallback-basic-architecture-300x158.png 300w, https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/cloud-sql-mysql-disaster-recovery-complete-failover-fallback-basic-architecture-768x403.png 768w\" sizes=\"auto, (max-width: 830px) 100vw, 830px\" \/><br \/>\n(<em>source<\/em>: <a href=\"https:\/\/cloud.google.com\/architecture\/intro-to-cloud-sql-disaster-recovery\">https:\/\/cloud.google.com\/architecture\/intro-to-cloud-sql-disaster-recovery<\/a>)<\/p>\n<p>In the case of a zone outage, using cloud_sql_proxy allows a transparent failover to the standby. 
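<\/p>\n<p>A minimal sketch of such a setup (the region segment of the connection name is an assumption here; the real value is given by gcloud sql instances describe): the proxy listens locally and routes the connection to whichever node currently holds the primary role:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ cloud_sql_proxy -instances=paas-postgresql-mysql:europe-west1:cloudpg13=tcp:5432 &amp;\r\n$ psql &quot;host=127.0.0.1 port=5432 dbname=dvdrental user=postgres&quot;\r\n<\/pre>\n<p>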
However, if the whole region is lost, failing over to the read replica must be done manually through a <a href=\"https:\/\/cloud.google.com\/architecture\/cloud-sql-postgres-disaster-recovery-complete-failover-fallback\">documented procedure<\/a>, which broadly consists of:<br \/>\n1) Detaching the read replica from its failed primary.<br \/>\n2) Promoting the replica so that it becomes the new primary.<br \/>\n3) Making sure that no application can still connect to the former primary or standby in the initial region, since there is no safeguard against split-brain.<br \/>\n4) Pointing the applications at the new primary in the second region.<br \/>\n5) Possibly re-creating a regional standby in the second region, and a read replica in a third region, to protect against losing region 2 in turn.<br \/>\n  Obviously, all of this can be automated with gcloud or <a href=\"https:\/\/registry.terraform.io\/modules\/GoogleCloudPlatform\/sql-db\/google\/latest\">Terraform<\/a>, but personally I am not a big fan of solutions that fail over on their own, even on-premises. Having a robust procedure with every step scripted, while keeping control over the sequence, seems more reassuring to me. <\/p>\n<h2>Migrating from on-premises to Cloud SQL:<\/h2>\n<p><a href=\"https:\/\/cloud.google.com\/sql\/docs\/mysql\/replication\/replication-from-external\">Unlike Cloud SQL for MySQL<\/a>, Cloud SQL for PostgreSQL does not support replication from an external source, so migrating an on-premises PostgreSQL cluster through replication is ruled out. 
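<\/p>\n<p>In practice, the dump-based path goes through a Cloud Storage bucket; a minimal sketch, with a hypothetical bucket name and assuming a plain-SQL pg_dump output (gcloud sql import sql expects SQL text, not a custom-format archive):<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ pg_dump -Fp dvdrental &gt; dvdrental.sql\r\n$ gsutil cp dvdrental.sql gs:\/\/my-migration-bucket\/\r\n$ gcloud sql import sql cloudpg13 gs:\/\/my-migration-bucket\/dvdrental.sql --database=dvdrental\r\n<\/pre>\n<p>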
That leaves the option of importing a pg_dump directly into a PaaS instance, but this implies a longer downtime, proportional to the volume of data to handle.<br \/>\nWatch out for the value of <a href=\"https:\/\/www.postgresql.org\/docs\/current\/runtime-config-resource.html\">temp_file_limit <\/a> when the indexes are rebuilt: the default value on my PostgreSQL 13 PaaS instance is around 2 GB, whereas on-premises there is no limit by default. <\/p>\n<p>Another option is to use the <a href=\"https:\/\/cloud.google.com\/database-migration\/docs\/postgres\">Database Migration Service<\/a>, whose range of features goes well beyond the scope of this article.<\/p>\n<h2>Conclusion: <\/h2>\n<p>When I evaluated the first generation of the PostgreSQL PaaS on GCP in 2018, I found quite a few gaps: almost nothing to monitor with, and a small configuration surface. <\/p>\n<p>The least one can say is that efforts have since been made to close the gap between on-premises capabilities and those of an equivalent PaaS: extensions, interfaces, versions, GUC parameters, everything is covered. The only remaining snag is getting over the first step (the migration to the PaaS), especially with a large amount of data to move. 
We will have to test the migration assistant, maybe in a future episode!<\/p>\n<p>A+ ~David<\/p>","protected":false},"excerpt":{"rendered":"<p>Dans le m\u00eame esprit que ce qui avait \u00e9t\u00e9 fait pour MySQL dans cet \u00e9pisode, nous allons balayer les possibilit\u00e9s qu&#8217;offre la solution Cloud SQL PostgreSQL du cloud de Moutain View. 
Pour la revue du d\u00e9cor (briques de service, organisation&hellip; <a href=\"https:\/\/blog.capdata.fr\/index.php\/etude-paas-postgresql-13-sur-gcp\/\" class=\"more-link\">Continuer la lecture <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":8717,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[296,266],"tags":[297,380,315],"class_list":["post-8715","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-google-cloud-platform","category-postgresql","tag-cloud","tag-cloud-sql","tag-paas"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v20.8 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Etude PaaS PostgreSQL 13 sur GCP - Capdata TECH BLOG<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/blog.capdata.fr\/index.php\/etude-paas-postgresql-13-sur-gcp\/\" \/>\n<meta property=\"og:locale\" content=\"fr_FR\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Etude PaaS PostgreSQL 13 sur GCP - Capdata TECH BLOG\" \/>\n<meta property=\"og:description\" content=\"Dans le m\u00eame esprit que ce qui avait \u00e9t\u00e9 fait pour MySQL dans cet \u00e9pisode, nous allons balayer les possibilit\u00e9s qu&#8217;offre la solution Cloud SQL PostgreSQL du cloud de Moutain View. 
Pour la revue du d\u00e9cor (briques de service, organisation&hellip; Continuer la lecture &rarr;\" \/>\n<meta property=\"og:url\" content=\"https:\/\/blog.capdata.fr\/index.php\/etude-paas-postgresql-13-sur-gcp\/\" \/>\n<meta property=\"og:site_name\" content=\"Capdata TECH BLOG\" \/>\n<meta property=\"article:published_time\" content=\"2021-06-18T22:20:57+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/PGGCP.png\" \/>\n\t<meta property=\"og:image:width\" content=\"734\" \/>\n\t<meta property=\"og:image:height\" content=\"335\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"David Baffaleuf\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"\u00c9crit par\" \/>\n\t<meta name=\"twitter:data1\" content=\"David Baffaleuf\" \/>\n\t<meta name=\"twitter:label2\" content=\"Dur\u00e9e de lecture estim\u00e9e\" \/>\n\t<meta name=\"twitter:data2\" content=\"19 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/blog.capdata.fr\/index.php\/etude-paas-postgresql-13-sur-gcp\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/blog.capdata.fr\/index.php\/etude-paas-postgresql-13-sur-gcp\/\"},\"author\":{\"name\":\"David Baffaleuf\",\"@id\":\"https:\/\/blog.capdata.fr\/#\/schema\/person\/136297da9f61d6e4878abe0f48bc5fbf\"},\"headline\":\"Etude PaaS PostgreSQL 13 sur GCP\",\"datePublished\":\"2021-06-18T22:20:57+00:00\",\"dateModified\":\"2021-06-18T22:20:57+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/blog.capdata.fr\/index.php\/etude-paas-postgresql-13-sur-gcp\/\"},\"wordCount\":3833,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/blog.capdata.fr\/#organization\"},\"keywords\":[\"cloud\",\"cloud 
sql\",\"PaaS\"],\"articleSection\":[\"GCP\",\"PostgreSQL\"],\"inLanguage\":\"fr-FR\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/blog.capdata.fr\/index.php\/etude-paas-postgresql-13-sur-gcp\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/blog.capdata.fr\/index.php\/etude-paas-postgresql-13-sur-gcp\/\",\"url\":\"https:\/\/blog.capdata.fr\/index.php\/etude-paas-postgresql-13-sur-gcp\/\",\"name\":\"Etude PaaS PostgreSQL 13 sur GCP - Capdata TECH BLOG\",\"isPartOf\":{\"@id\":\"https:\/\/blog.capdata.fr\/#website\"},\"datePublished\":\"2021-06-18T22:20:57+00:00\",\"dateModified\":\"2021-06-18T22:20:57+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/blog.capdata.fr\/index.php\/etude-paas-postgresql-13-sur-gcp\/#breadcrumb\"},\"inLanguage\":\"fr-FR\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/blog.capdata.fr\/index.php\/etude-paas-postgresql-13-sur-gcp\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/blog.capdata.fr\/index.php\/etude-paas-postgresql-13-sur-gcp\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Accueil\",\"item\":\"https:\/\/blog.capdata.fr\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Etude PaaS PostgreSQL 13 sur GCP\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/blog.capdata.fr\/#website\",\"url\":\"https:\/\/blog.capdata.fr\/\",\"name\":\"Capdata TECH BLOG\",\"description\":\"Le blog technique sur les bases de donn\u00e9es de CAP DATA Consulting\",\"publisher\":{\"@id\":\"https:\/\/blog.capdata.fr\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/blog.capdata.fr\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"fr-FR\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/blog.capdata.fr\/#organization\",\"name\":\"Capdata TECH 
BLOG\",\"url\":\"https:\/\/blog.capdata.fr\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\/\/blog.capdata.fr\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/01\/logo_capdata.webp\",\"contentUrl\":\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/01\/logo_capdata.webp\",\"width\":800,\"height\":254,\"caption\":\"Capdata TECH BLOG\"},\"image\":{\"@id\":\"https:\/\/blog.capdata.fr\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.linkedin.com\/company\/cap-data-consulting\/mycompany\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/blog.capdata.fr\/#\/schema\/person\/136297da9f61d6e4878abe0f48bc5fbf\",\"name\":\"David Baffaleuf\",\"sameAs\":[\"http:\/\/www.capdata.fr\"],\"url\":\"https:\/\/blog.capdata.fr\/index.php\/author\/dbaffaleuf\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Etude PaaS PostgreSQL 13 sur GCP - Capdata TECH BLOG","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/blog.capdata.fr\/index.php\/etude-paas-postgresql-13-sur-gcp\/","og_locale":"fr_FR","og_type":"article","og_title":"Etude PaaS PostgreSQL 13 sur GCP - Capdata TECH BLOG","og_description":"Dans le m\u00eame esprit que ce qui avait \u00e9t\u00e9 fait pour MySQL dans cet \u00e9pisode, nous allons balayer les possibilit\u00e9s qu&#8217;offre la solution Cloud SQL PostgreSQL du cloud de Moutain View. 
Pour la revue du d\u00e9cor (briques de service, organisation&hellip; Continuer la lecture &rarr;","og_url":"https:\/\/blog.capdata.fr\/index.php\/etude-paas-postgresql-13-sur-gcp\/","og_site_name":"Capdata TECH BLOG","article_published_time":"2021-06-18T22:20:57+00:00","og_image":[{"width":734,"height":335,"url":"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2021\/06\/PGGCP.png","type":"image\/png"}],"author":"David Baffaleuf","twitter_card":"summary_large_image","twitter_misc":{"\u00c9crit par":"David Baffaleuf","Dur\u00e9e de lecture estim\u00e9e":"19 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/blog.capdata.fr\/index.php\/etude-paas-postgresql-13-sur-gcp\/#article","isPartOf":{"@id":"https:\/\/blog.capdata.fr\/index.php\/etude-paas-postgresql-13-sur-gcp\/"},"author":{"name":"David Baffaleuf","@id":"https:\/\/blog.capdata.fr\/#\/schema\/person\/136297da9f61d6e4878abe0f48bc5fbf"},"headline":"Etude PaaS PostgreSQL 13 sur GCP","datePublished":"2021-06-18T22:20:57+00:00","dateModified":"2021-06-18T22:20:57+00:00","mainEntityOfPage":{"@id":"https:\/\/blog.capdata.fr\/index.php\/etude-paas-postgresql-13-sur-gcp\/"},"wordCount":3833,"commentCount":0,"publisher":{"@id":"https:\/\/blog.capdata.fr\/#organization"},"keywords":["cloud","cloud sql","PaaS"],"articleSection":["GCP","PostgreSQL"],"inLanguage":"fr-FR","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/blog.capdata.fr\/index.php\/etude-paas-postgresql-13-sur-gcp\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/blog.capdata.fr\/index.php\/etude-paas-postgresql-13-sur-gcp\/","url":"https:\/\/blog.capdata.fr\/index.php\/etude-paas-postgresql-13-sur-gcp\/","name":"Etude PaaS PostgreSQL 13 sur GCP - Capdata TECH 
BLOG","isPartOf":{"@id":"https:\/\/blog.capdata.fr\/#website"},"datePublished":"2021-06-18T22:20:57+00:00","dateModified":"2021-06-18T22:20:57+00:00","breadcrumb":{"@id":"https:\/\/blog.capdata.fr\/index.php\/etude-paas-postgresql-13-sur-gcp\/#breadcrumb"},"inLanguage":"fr-FR","potentialAction":[{"@type":"ReadAction","target":["https:\/\/blog.capdata.fr\/index.php\/etude-paas-postgresql-13-sur-gcp\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/blog.capdata.fr\/index.php\/etude-paas-postgresql-13-sur-gcp\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Accueil","item":"https:\/\/blog.capdata.fr\/"},{"@type":"ListItem","position":2,"name":"Etude PaaS PostgreSQL 13 sur GCP"}]},{"@type":"WebSite","@id":"https:\/\/blog.capdata.fr\/#website","url":"https:\/\/blog.capdata.fr\/","name":"Capdata TECH BLOG","description":"Le blog technique sur les bases de donn\u00e9es de CAP DATA Consulting","publisher":{"@id":"https:\/\/blog.capdata.fr\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/blog.capdata.fr\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"fr-FR"},{"@type":"Organization","@id":"https:\/\/blog.capdata.fr\/#organization","name":"Capdata TECH BLOG","url":"https:\/\/blog.capdata.fr\/","logo":{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/blog.capdata.fr\/#\/schema\/logo\/image\/","url":"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/01\/logo_capdata.webp","contentUrl":"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/01\/logo_capdata.webp","width":800,"height":254,"caption":"Capdata TECH BLOG"},"image":{"@id":"https:\/\/blog.capdata.fr\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.linkedin.com\/company\/cap-data-consulting\/mycompany\/"]},{"@type":"Person","@id":"https:\/\/blog.capdata.fr\/#\/schema\/person\/136297da9f61d6e4878abe0f48bc5fbf","name":"David 
Baffaleuf","sameAs":["http:\/\/www.capdata.fr"],"url":"https:\/\/blog.capdata.fr\/index.php\/author\/dbaffaleuf\/"}]}},"_links":{"self":[{"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/posts\/8715","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/comments?post=8715"}],"version-history":[{"count":9,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/posts\/8715\/revisions"}],"predecessor-version":[{"id":8775,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/posts\/8715\/revisions\/8775"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/media\/8717"}],"wp:attachment":[{"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/media?parent=8715"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/categories?post=8715"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/tags?post=8715"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}