{"id":10150,"date":"2023-06-06T13:21:23","date_gmt":"2023-06-06T12:21:23","guid":{"rendered":"https:\/\/blog.capdata.fr\/?p=10150"},"modified":"2023-06-07T07:28:41","modified_gmt":"2023-06-07T06:28:41","slug":"pgo-operateurs-kubernetes-pour-postgresql-la-suite","status":"publish","type":"post","link":"https:\/\/blog.capdata.fr\/index.php\/pgo-operateurs-kubernetes-pour-postgresql-la-suite\/","title":{"rendered":"PGO: Kubernetes operators for PostgreSQL, part two!"},"content":{"rendered":"<p>Hello everyone! This week we continue our little tour of Kubernetes operators for PostgreSQL: after <a href=\"https:\/\/blog.capdata.fr\/index.php\/kubegres-loperateur-kubernetes-cle-en-main-pour-postgresql\/\">kubegres<\/a>, it is the turn of <a href=\"https:\/\/access.crunchydata.com\/documentation\/postgres-operator\/v5\/\">PGO<\/a> by CrunchyData.
<\/p>\n<p><a href=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/06\/pgo2.png\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/06\/pgo2.png\" alt=\"\" width=\"954\" height=\"717\" class=\"aligncenter size-full wp-image-10154\" srcset=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/06\/pgo2.png 954w, https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/06\/pgo2-300x225.png 300w, https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/06\/pgo2-768x577.png 768w\" sizes=\"auto, (max-width: 954px) 100vw, 954px\" \/><\/a><\/p>\n<h2>Some general information about the PGO operator<\/h2>\n<p>Compared to Kubegres, PGO looks more complete: out of the box it deploys a replica by default, can back up directly with <a href=\"https:\/\/pgbackrest.org\/\">pgBackRest<\/a> to local or cloud repositories, and ships a <a href=\"https:\/\/www.pgbouncer.org\/\">pgBouncer<\/a> pod and an exporter for <a href=\"https:\/\/prometheus.io\/\">Prometheus<\/a>.
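As a sketch of how those extras can be switched on (field names follow the PGO v5 documentation; the image tags below are illustrative placeholders, not taken from this deployment), pgBouncer and the Prometheus exporter are declared directly in the cluster definition:

```yaml
# Hedged sketch (PGO v5 API): enabling the bundled pgBouncer proxy
# and the Prometheus exporter. Image tags are placeholders.
spec:
  proxy:
    pgBouncer:
      replicas: 1
      image: registry.developers.crunchydata.com/crunchydata/crunchy-pgbouncer:ubi8-1.18-0
  monitoring:
    pgmonitor:
      exporter:
        image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres-exporter:ubi8-5.3.1-0
```

PGO then manages the extra pods alongside the cluster; check the operator docs for the exact image tags matching your PGO release.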
<\/p>\n<p><a href=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/06\/pgo1-1.png\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/06\/pgo1-1.png\" alt=\"\" width=\"1074\" height=\"660\" class=\"aligncenter size-full wp-image-10156\" srcset=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/06\/pgo1-1.png 1074w, https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/06\/pgo1-1-300x184.png 300w, https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/06\/pgo1-1-1024x629.png 1024w, https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/06\/pgo1-1-768x472.png 768w\" sizes=\"auto, (max-width: 1074px) 100vw, 1074px\" \/><\/a><br \/>\n<center>(source: <a href=\"https:\/\/access.crunchydata.com\/documentation\/postgres-operator\/v5\/architecture\/overview\/\">https:\/\/access.crunchydata.com\/documentation\/postgres-operator\/v5\/architecture\/overview\/<\/a>)<\/center><\/p>\n<p>As with kubegres, the PGO operator wraps core Kubernetes objects inside its <em>deployments<\/em>: StatefulSets for the primary and replica pods, Services, PVs and PVCs for storage, and so on, as we will see when deploying our first cluster. <\/p>\n<h2>Installing the PGO operator<\/h2>\n<p>The first thing to do before creating our first cluster is to deploy the PGO operator itself, either with <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/manage-kubernetes-objects\/kustomization\/\">Kustomize<\/a> or with <a href=\"https:\/\/helm.sh\/\">Helm<\/a>. CrunchyData provides <a href=\"https:\/\/github.com\/CrunchyData\/postgres-operator-examples\/fork\">a git repo to clone<\/a> that already contains the base configuration files, which we can modify as needed to customise our deployment.
Once the repo is cloned to our Capdata GitHub account, we can fetch the files locally and inspect the definition files. We will use Kustomize for this example:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ git clone --depth=1 &quot;https:\/\/github.com\/Capdata\/postgres-operator-examples.git&quot;\r\nCloning into 'postgres-operator-examples'...\r\nremote: Enumerating objects: 140, done.\r\nremote: Counting objects: 100% (140\/140), done.\r\nremote: Compressing objects: 100% (105\/105), done.\r\nremote: Total 140 (delta 33), reused 81 (delta 26), pack-reused 0\r\nReceiving objects: 100% (140\/140), 150.57 KiB | 3.01 MiB\/s, done.\r\nResolving deltas: 100% (33\/33), done.\r\n\r\n$ cd postgres-operator-examples\/kustomize\r\n\r\n$ tree -a install\/namespace\/\r\ninstall\/namespace\/\r\n\u251c\u2500\u2500 kustomization.yaml\r\n\u2514\u2500\u2500 namespace.yaml\r\n\r\n$ tree -a install\/default\/\r\ninstall\/default\/\r\n\u251c\u2500\u2500 kustomization.yaml\r\n\u2514\u2500\u2500 selectors.yaml\r\n\r\n<\/pre>\n<p>Applying ~kustomize\/install\/namespace\/namespace.yaml creates a dedicated <em>postgres-operator<\/em> namespace: <\/p>\n<pre class=\"brush: yaml; title: ; notranslate\" title=\"\">\r\napiVersion: v1\r\nkind: Namespace\r\nmetadata:\r\n  name: postgres-operator\r\n<\/pre>\n<p>Then ~kustomize\/install\/default creates the rest of the operator:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ kubectl apply --kustomize=kustomize\/install\/namespace\r\nnamespace\/postgres-operator created\r\n\r\n$ kubectl apply --server-side --kustomize=kustomize\/install\/default\r\ncustomresourcedefinition.apiextensions.k8s.io\/pgupgrades.postgres-operator.crunchydata.com serverside-applied\r\ncustomresourcedefinition.apiextensions.k8s.io\/postgresclusters.postgres-operator.crunchydata.com
serverside-applied\r\nserviceaccount\/pgo serverside-applied\r\nserviceaccount\/postgres-operator-upgrade serverside-applied\r\nclusterrole.rbac.authorization.k8s.io\/postgres-operator serverside-applied\r\nclusterrole.rbac.authorization.k8s.io\/postgres-operator-upgrade serverside-applied\r\nclusterrolebinding.rbac.authorization.k8s.io\/postgres-operator serverside-applied\r\nclusterrolebinding.rbac.authorization.k8s.io\/postgres-operator-upgrade serverside-applied\r\ndeployment.apps\/pgo serverside-applied\r\ndeployment.apps\/pgo-upgrade serverside-applied\r\n\r\n$ kubectl get all --namespace=postgres-operator\r\nNAME                               READY   STATUS    RESTARTS   AGE\r\npod\/pgo-774db98dbc-htm5d           1\/1     Running   0          74m\r\npod\/pgo-upgrade-785dd6dc4c-cw2ld   1\/1     Running   0          74m\r\n\r\nNAME                          READY   UP-TO-DATE   AVAILABLE   AGE\r\ndeployment.apps\/pgo           1\/1     1            1           74m\r\ndeployment.apps\/pgo-upgrade   1\/1     1            1           74m\r\n\r\nNAME                                     DESIRED   CURRENT   READY   AGE\r\nreplicaset.apps\/pgo-774db98dbc           1         1         1       74m\r\nreplicaset.apps\/pgo-upgrade-785dd6dc4c   1         1         1       74m\r\n\r\n<\/pre>\n<h2>Creating a first PGO cluster<\/h2>\n<p>Now that our operator is installed, it is time to look at how the future cluster will be configured.
Everything lives in ~kustomize\/postgres:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ tree -a postgres\/\r\npostgres\/\r\n\u251c\u2500\u2500 kustomization.yaml\r\n\u2514\u2500\u2500 postgres.yaml\r\n<\/pre>\n<p>The heart of our cluster is in postgres.yaml:<\/p>\n<pre class=\"brush: yaml; title: ; notranslate\" title=\"\">\r\napiVersion: postgres-operator.crunchydata.com\/v1beta1\r\nkind: PostgresCluster\r\nmetadata:\r\n  name: hippo\r\nspec:\r\n  image: registry.developers.crunchydata.com\/crunchydata\/crunchy-postgres:ubi8-15.2-0\r\n  postgresVersion: 15\r\n  instances:\r\n    - name: instance1\r\n      dataVolumeClaimSpec:\r\n        accessModes:\r\n        - &quot;ReadWriteOnce&quot;\r\n        resources:\r\n          requests:\r\n            storage: 1Gi\r\n  backups:\r\n    pgbackrest:\r\n      image: registry.developers.crunchydata.com\/crunchydata\/crunchy-pgbackrest:ubi8-2.41-4\r\n      repos:\r\n      - name: repo1\r\n        volume:\r\n          volumeClaimSpec:\r\n            accessModes:\r\n            - &quot;ReadWriteOnce&quot;\r\n            resources:\r\n              requests:\r\n                storage: 1Gi\r\n<\/pre>\n<p>As with Kubegres, the PGO operator lets us create a new type of object in Kubernetes:<\/p>\n<pre class=\"brush: yaml; title: ; notranslate\" title=\"\">\r\nkind: PostgresCluster\r\n<\/pre>\n<p>The default cluster name is &#8216;<em>hippo<\/em>&#8217;, but we can change it without any problem. For the pods (primary, replicas, pgBackRest), the images are specified, as are the attached volumes, via PVC abstractions called &#8220;<em>dataVolumeClaimSpec<\/em>&#8221; for the PostgreSQL pods and &#8220;<em>volumeClaimSpec<\/em>&#8221; for the backup part.
<\/p>\n<p>We can extend the default definition file with a few customisations:<br \/>\n&#8211; Add CPU and memory resource limits via <em>instances.resources.limits<\/em><br \/>\n&#8211; Add a replica<br \/>\n&#8211; Rename our cluster &#8216;<em>pgcluster1<\/em>&#8217;<br \/>\n&#8211; And finally add a NodePort to expose our cluster to the outside world:<\/p>\n<pre class=\"brush: yaml; title: ; notranslate\" title=\"\">\r\napiVersion: postgres-operator.crunchydata.com\/v1beta1\r\nkind: PostgresCluster\r\nmetadata:\r\n  name: pgcluster1\r\nspec:\r\n  image: registry.developers.crunchydata.com\/crunchydata\/crunchy-postgres:ubi8-15.2-0\r\n  postgresVersion: 15\r\n  instances:\r\n    - name: postgresdb1\r\n      replicas: 2\r\n      resources:\r\n        limits:\r\n          cpu: &quot;0.5&quot;\r\n          memory: 1Gi\r\n      dataVolumeClaimSpec:\r\n        accessModes:\r\n        - &quot;ReadWriteOnce&quot;\r\n        resources:\r\n          requests:\r\n            storage: 1Gi\r\n  backups:\r\n    pgbackrest:\r\n      image: registry.developers.crunchydata.com\/crunchydata\/crunchy-pgbackrest:ubi8-2.41-4\r\n      repos:\r\n      - name: repo1\r\n        volume:\r\n          volumeClaimSpec:\r\n            accessModes:\r\n            - &quot;ReadWriteOnce&quot;\r\n            resources:\r\n              requests:\r\n                storage: 1Gi\r\n  service:\r\n    metadata:\r\n      annotations:\r\n        annotation1: &quot;mdnodeport1&quot;\r\n      labels:\r\n        label1: &quot;32000&quot;\r\n    type: NodePort\r\n    nodePort: 32000\r\n<\/pre>\n<p>We will come back to the backup configuration a little later.
For now, let&#8217;s create our cluster:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ kubectl apply -k kustomize\/postgres\/\r\npostgrescluster.postgres-operator.crunchydata.com\/pgcluster1 created\r\n\r\n$ kubectl get all --namespace=postgres-operator\r\nNAME                                READY   STATUS    RESTARTS   AGE\r\npod\/pgcluster1-postgresdb1-55pl-0   4\/4     Running   0          11s\r\npod\/pgcluster1-postgresdb1-9w2w-0   4\/4     Running   0          11s\r\npod\/pgcluster1-repo-host-0          2\/2     Running   0          11s\r\npod\/pgo-774db98dbc-tshp6            1\/1     Running   0          68s\r\npod\/pgo-upgrade-785dd6dc4c-ntwkd    1\/1     Running   0          68s\r\n\r\nNAME                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE\r\nservice\/pgcluster1-ha          NodePort    10.102.232.74   &lt;none&gt;        5432:32000\/TCP   51m\r\nservice\/pgcluster1-ha-config   ClusterIP   None            &lt;none&gt;        &lt;none&gt;           51m\r\nservice\/pgcluster1-pods        ClusterIP   None            &lt;none&gt;        &lt;none&gt;           51m\r\nservice\/pgcluster1-primary     ClusterIP   None            &lt;none&gt;        5432\/TCP         51m\r\nservice\/pgcluster1-replicas    ClusterIP   10.106.148.50   &lt;none&gt;        5432\/TCP         51m\r\n\r\nNAME                          READY   UP-TO-DATE   AVAILABLE   AGE\r\ndeployment.apps\/pgo           1\/1     1            1           69s\r\ndeployment.apps\/pgo-upgrade   1\/1     1            1           68s\r\n\r\nNAME                                     DESIRED   CURRENT   READY   AGE\r\nreplicaset.apps\/pgo-774db98dbc           1         1         1       68s\r\nreplicaset.apps\/pgo-upgrade-785dd6dc4c   1         1         1       68s\r\n\r\nNAME                                           READY   AGE\r\nstatefulset.apps\/pgcluster1-postgresdb1-55pl   1\/1     11s\r\nstatefulset.apps\/pgcluster1-postgresdb1-9w2w
1\/1     11s\r\nstatefulset.apps\/pgcluster1-repo-host          1\/1     11s\r\n<\/pre>\n<p>You may run into available memory \/ CPU quota problems: the pods stay in Pending, and deleting and recreating the objects is not enough. In my case the namespace deletion got stuck in Terminating and I had to run through the <a href=\"https:\/\/www.ibm.com\/docs\/en\/cloud-private\/3.2.0?topic=console-namespace-is-stuck-in-terminating-state\">manual namespace deletion procedure<\/a> to start over from scratch. <\/p>\n<p>In short, our deployment created 3 new pods and 3 StatefulSets (primary, replica and pgBackRest), 4 ClusterIP services and our NodePort. <\/p>\n<p>To connect, we need to retrieve <a href=\"https:\/\/access.crunchydata.com\/documentation\/postgres-operator\/5.3.1\/tutorial\/connect-cluster\/\">the secret<\/a> that was created when the cluster was initialised.
Let&#8217;s take a look at the secret as a whole to see what it contains:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ kubectl get secret --namespace=postgres-operator pgcluster1-pguser-pgcluster1 -o json\r\n{\r\n    &quot;apiVersion&quot;: &quot;v1&quot;,\r\n    &quot;data&quot;: {\r\n        &quot;dbname&quot;: &quot;cGdjbHVzdGVyMQ==&quot;,\r\n        &quot;host&quot;: &quot;cGdjbHVzdGVyMS1wcmltYXJ5LnBvc3RncmVzLW9wZXJhdG9yLnN2Yw==&quot;,\r\n        &quot;jdbc-uri&quot;: &quot;amRiYzpwb3N0Z3Jlc3FsOi8vcGdjbHVzdGVyMS1wcmltYXJ5LnBvc3RncmVzLW9wZXJhdG9yLnN2Yzo1NDMyL3BnY2x1c3RlcjE\/cGFzc3dvcmQ9UHBybiUzQnZ1WDlrSiU1RE1WQnZwd3QzTk5wJTJBJnVzZXI9cGdjbHVzdGVyMQ==&quot;,\r\n        &quot;password&quot;: &quot;UHBybjt2dVg5a0pdTVZCdnB3dDNOTnAq&quot;,\r\n        &quot;port&quot;: &quot;NTQzMg==&quot;,\r\n        &quot;uri&quot;: &quot;cG9zdGdyZXNxbDovL3BnY2x1c3RlcjE6UHBybjt2dVg5a0olNURNVkJ2cHd0M05OcCUyQUBwZ2NsdXN0ZXIxLXByaW1hcnkucG9zdGdyZXMtb3BlcmF0b3Iuc3ZjOjU0MzIvcGdjbHVzdGVyMQ==&quot;,\r\n        &quot;user&quot;: &quot;cGdjbHVzdGVyMQ==&quot;,\r\n        &quot;verifier&quot;: &quot;U0NSQU0tU0hBLTI1NiQ0MDk2Olo3OTNBUVIwU0xZUVBDY3BXNkRaSXc9PSRWUWdlc0VlSGVvVnpnakc4emkyRGJJNmlpemo1ZnJGWmN2K3c3NzZScVhVPTpDT1JDVStoQU1IeDBkRzBKaGU3dllwUTdFWTB4QzZ5RzJUUE5NWFV5MTlRPQ==&quot;\r\n    },\r\n    &quot;kind&quot;: &quot;Secret&quot;,\r\n    &quot;metadata&quot;: {\r\n        &quot;creationTimestamp&quot;: &quot;2023-06-05T11:39:32Z&quot;,\r\n        &quot;labels&quot;: {\r\n            &quot;postgres-operator.crunchydata.com\/cluster&quot;: &quot;pgcluster1&quot;,\r\n            &quot;postgres-operator.crunchydata.com\/pguser&quot;: &quot;pgcluster1&quot;,\r\n            &quot;postgres-operator.crunchydata.com\/role&quot;: &quot;pguser&quot;\r\n        },\r\n        &quot;name&quot;: &quot;pgcluster1-pguser-pgcluster1&quot;,\r\n        &quot;namespace&quot;: &quot;postgres-operator&quot;,\r\n        &quot;ownerReferences&quot;: [\r\n            
{\r\n                &quot;apiVersion&quot;: &quot;postgres-operator.crunchydata.com\/v1beta1&quot;,\r\n                &quot;blockOwnerDeletion&quot;: true,\r\n                &quot;controller&quot;: true,\r\n                &quot;kind&quot;: &quot;PostgresCluster&quot;,\r\n                &quot;name&quot;: &quot;pgcluster1&quot;,\r\n                &quot;uid&quot;: &quot;80bbee62-0602-4012-9c06-dcd23ca7723b&quot;\r\n            }\r\n        ],\r\n        &quot;resourceVersion&quot;: &quot;169451&quot;,\r\n        &quot;uid&quot;: &quot;fe833c97-0585-4910-95ca-fb1c7774d5b2&quot;\r\n    },\r\n    &quot;type&quot;: &quot;Opaque&quot;\r\n}\r\n<\/pre>\n<p>So we can retrieve the user and the password:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ export PGUSER=$(kubectl get secret --namespace=postgres-operator pgcluster1-pguser-pgcluster1 -o jsonpath={.data.user} | base64 -d)\r\n$ export PGPASSWORD=$(kubectl get secret --namespace=postgres-operator pgcluster1-pguser-pgcluster1 -o jsonpath={.data.password} | base64 -d)\r\n<\/pre>\n<p>And test the connection (note that the IP address is the node&#8217;s, cf. <em>kubectl describe nodes<\/em>):<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ psql -h 192.168.59.101 -p 32000 -c &quot;select version();&quot;\r\n                                                 version\r\n---------------------------------------------------------------------------------------------------------\r\n PostgreSQL 15.2 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-16), 64-bit\r\n(1 row)\r\n<\/pre>\n<h2>Failover and high availability<\/h2>\n<p>Let&#8217;s run a connection in a loop against the NodePort to grab the primary instance&#8217;s IP and see what happens during a failover.
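An aside on the secret itself: the data fields are plain base64, so they can be sanity-checked locally without kubectl. A minimal sketch, using two encoded values copied verbatim from the secret shown above:

```shell
# Decode two fields of the pguser secret by hand.
# The base64 strings are copied from the kubectl output above.
user=$(printf '%s' 'cGdjbHVzdGVyMQ==' | base64 -d)   # the "user" field
port=$(printf '%s' 'NTQzMg==' | base64 -d)           # the "port" field
echo "user=$user port=$port"                          # prints: user=pgcluster1 port=5432
```
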
<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ while(true); do psql -h 192.168.59.101 -p 32000 -c &quot;select inet_server_addr();&quot;; sleep 1; done\r\n inet_server_addr\r\n------------------\r\n 172.17.0.7\r\n(1 row)\r\n\r\n inet_server_addr\r\n------------------\r\n 172.17.0.7\r\n(1 row)\r\n\r\n(...)\r\n<\/pre>\n<p>To test a failover, we will go as far as deleting the primary instance&#8217;s StatefulSet. First retrieve its name, then delete it:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ kubectl -n postgres-operator get pods \\\r\n  --selector=postgres-operator.crunchydata.com\/role=master \\\r\n  -o jsonpath='{.items[*].metadata.labels.postgres-operator\\.crunchydata\\.com\/instance}'\r\npgcluster1-postgresdb1-9w2w\r\n\r\n$ kubectl delete statefulset --namespace=postgres-operator pgcluster1-postgresdb1-9w2w\r\nstatefulset.apps &quot;pgcluster1-postgresdb1-9w2w&quot; deleted\r\n<\/pre>\n<p>The looping connection shows that the IP has indeed changed:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n(...)\r\n inet_server_addr\r\n------------------\r\n 172.17.0.7\r\n(1 row)\r\n\r\n inet_server_addr\r\n------------------\r\n 172.17.0.7\r\n(1 row)\r\n\r\n inet_server_addr\r\n------------------\r\n 172.17.0.6\r\n(1 row)\r\n\r\n inet_server_addr\r\n------------------\r\n 172.17.0.6\r\n(1 row)\r\n<\/pre>\n<p>&#8230; and minikube detected the loss of the <em>pgcluster1-postgresdb1-9w2w<\/em> StatefulSet and recreated it in the background:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ kubectl get statefulset --namespace=postgres-operator\r\nNAME                          READY   AGE\r\npgcluster1-postgresdb1-55pl   1\/1     96m\r\npgcluster1-postgresdb1-9w2w   1\/1     11s\r\npgcluster1-repo-host          1\/1     96m\r\n<\/pre>\n<p>A number of additional options exist, notably anti-affinity to keep pods from running on the same nodes; see the <a href=\"https:\/\/access.crunchydata.com\/documentation\/postgres-operator\/5.3.1\/tutorial\/high-availability\/\">documentation<\/a> for more details.<\/p>\n<h2>Setting up backups with pgBackRest<\/h2>\n<p>Although it is possible to back up directly to AWS S3, Azure or GCP, for this example we deployed a simple Kubernetes volume.<br \/>\nTo add a schedule and a retention policy, a few properties must be added to <em>spec.backups.pgbackrest<\/em>:<\/p>\n<pre class=\"brush: yaml; title: ; notranslate\" title=\"\">\r\n  backups:\r\n    pgbackrest:\r\n      image: registry.developers.crunchydata.com\/crunchydata\/crunchy-pgbackrest:ubi8-2.41-4\r\n      global:\r\n        repo1-retention-full: &quot;14&quot;\r\n        repo1-retention-full-type: time\r\n      repos:\r\n      - name: repo1\r\n        schedules:\r\n          full: &quot;50 15 * * *&quot;\r\n        volume:\r\n          volumeClaimSpec:\r\n            accessModes:\r\n            - &quot;ReadWriteOnce&quot;\r\n            resources:\r\n              requests:\r\n                storage: 1Gi\r\n<\/pre>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ kubectl apply -k kustomize\/postgres\/\r\npostgrescluster.postgres-operator.crunchydata.com\/pgcluster1 configured\r\n\r\n$ kubectl get cronjobs --namespace=postgres-operator\r\nNAME                    SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE\r\npgcluster1-repo1-full   50 15 * * *    False     0        &lt;none&gt;          106s\r\n<\/pre>\n<p>Kubernetes created an associated CronJob.
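For completeness, the PGO v5 documentation also describes one-off, on-demand backups: a "manual" section in the spec declares the target repo and options, and an annotation on the PostgresCluster triggers the run. A hedged sketch, not tested in this deployment:

```yaml
# Sketch based on the PGO v5 docs (untested here):
# declare which repo manual backups go to and with what options...
  backups:
    pgbackrest:
      manual:
        repoName: repo1
        options:
        - --type=full
# ...then trigger one with an annotation, e.g.:
# kubectl annotate -n postgres-operator postgrescluster pgcluster1 \
#   postgres-operator.crunchydata.com/pgbackrest-backup="$(date)"
```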
Note that backups can also be differential or incremental, depending on the backup strategy you have in mind.<br \/>\nRetention, for its part, can be expressed in days (<em>time<\/em>) or as a number of backups (<em>count<\/em>). Thanks to pgBackRest, PGO can then use those backups either to clone the databases to another cluster, to restore them to a point in time on a new cluster (to compare data or recover rows deleted by mistake, for instance), or to restore in place. The topic is long enough to deserve a future episode of its own, but overall pgBackRest&#8217;s firepower in the service of restorability gives PGO an extra edge over its competitors. <\/p>\n<h2>Conclusion<\/h2>\n<p>In this first look at PGO we have only scratched the surface of what this operator can do, and it seems to go further than its competitors with:<br \/>\n&#8211; Integrated backup and restore via pgBackRest, including the ability to back up directly to the cloud.<br \/>\n&#8211; Prometheus integration.<br \/>\n&#8211; pgBouncer integration.<br \/>\n&#8211; Deployment through standards such as Kustomize or Helm.<br \/>\n&#8211; Built-in secret management.<br \/>\netc&#8230; More articles digging deeper into PGO will no doubt follow; in the meantime, happy reading and see you soon on the Cap Data blog!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Hello everyone! This week we continue our little tour of Kubernetes operators for PostgreSQL: after kubegres, it is the turn of PGO by CrunchyData. Some general information about the PGO operator Compared to Kubegres, PGO seems&hellip; <a href=\"https:\/\/blog.capdata.fr\/index.php\/pgo-operateurs-kubernetes-pour-postgresql-la-suite\/\" class=\"more-link\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":10154,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[383,442,266],"tags":[],"class_list":["post-10150","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-container","category-devops","category-postgresql"],"yoast_head":"<meta name=\"author\" content=\"David Baffaleuf\" \/>"}
BLOG\"},\"image\":{\"@id\":\"https:\/\/blog.capdata.fr\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.linkedin.com\/company\/cap-data-consulting\/mycompany\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/blog.capdata.fr\/#\/schema\/person\/136297da9f61d6e4878abe0f48bc5fbf\",\"name\":\"David Baffaleuf\",\"sameAs\":[\"http:\/\/www.capdata.fr\"],\"url\":\"https:\/\/blog.capdata.fr\/index.php\/author\/dbaffaleuf\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"PGO : op\u00e9rateurs kubernetes pour PostgreSQL, la suite ! - Capdata TECH BLOG","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/blog.capdata.fr\/index.php\/pgo-operateurs-kubernetes-pour-postgresql-la-suite\/","og_locale":"fr_FR","og_type":"article","og_title":"PGO : op\u00e9rateurs kubernetes pour PostgreSQL, la suite ! - Capdata TECH BLOG","og_description":"Salut \u00e0 toutes et tous ! Cette semaine la suite de notre petit tour des op\u00e9rateurs Kubernetes pour PostgreSQL, et apr\u00e8s kubegres, c&#8217;est au tour de PGO de CrunchyData. 
Quelques infos g\u00e9n\u00e9rales sur l&#8217;op\u00e9rateur PGO Compar\u00e9 \u00e0 Kubegres, PGO semble&hellip; Continuer la lecture &rarr;","og_url":"https:\/\/blog.capdata.fr\/index.php\/pgo-operateurs-kubernetes-pour-postgresql-la-suite\/","og_site_name":"Capdata TECH BLOG","article_published_time":"2023-06-06T12:21:23+00:00","article_modified_time":"2023-06-07T06:28:41+00:00","og_image":[{"width":954,"height":717,"url":"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/06\/pgo2.png","type":"image\/png"}],"author":"David Baffaleuf","twitter_card":"summary_large_image","twitter_misc":{"\u00c9crit par":"David Baffaleuf","Dur\u00e9e de lecture estim\u00e9e":"9 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/blog.capdata.fr\/index.php\/pgo-operateurs-kubernetes-pour-postgresql-la-suite\/#article","isPartOf":{"@id":"https:\/\/blog.capdata.fr\/index.php\/pgo-operateurs-kubernetes-pour-postgresql-la-suite\/"},"author":{"name":"David Baffaleuf","@id":"https:\/\/blog.capdata.fr\/#\/schema\/person\/136297da9f61d6e4878abe0f48bc5fbf"},"headline":"PGO : op\u00e9rateurs kubernetes pour PostgreSQL, la suite !","datePublished":"2023-06-06T12:21:23+00:00","dateModified":"2023-06-07T06:28:41+00:00","mainEntityOfPage":{"@id":"https:\/\/blog.capdata.fr\/index.php\/pgo-operateurs-kubernetes-pour-postgresql-la-suite\/"},"wordCount":2090,"commentCount":0,"publisher":{"@id":"https:\/\/blog.capdata.fr\/#organization"},"articleSection":["Container","Devops","PostgreSQL"],"inLanguage":"fr-FR","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/blog.capdata.fr\/index.php\/pgo-operateurs-kubernetes-pour-postgresql-la-suite\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/blog.capdata.fr\/index.php\/pgo-operateurs-kubernetes-pour-postgresql-la-suite\/","url":"https:\/\/blog.capdata.fr\/index.php\/pgo-operateurs-kubernetes-pour-postgresql-la-suite\/","name":"PGO : op\u00e9rateurs kubernetes pour 
PostgreSQL, la suite ! - Capdata TECH BLOG","isPartOf":{"@id":"https:\/\/blog.capdata.fr\/#website"},"datePublished":"2023-06-06T12:21:23+00:00","dateModified":"2023-06-07T06:28:41+00:00","breadcrumb":{"@id":"https:\/\/blog.capdata.fr\/index.php\/pgo-operateurs-kubernetes-pour-postgresql-la-suite\/#breadcrumb"},"inLanguage":"fr-FR","potentialAction":[{"@type":"ReadAction","target":["https:\/\/blog.capdata.fr\/index.php\/pgo-operateurs-kubernetes-pour-postgresql-la-suite\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/blog.capdata.fr\/index.php\/pgo-operateurs-kubernetes-pour-postgresql-la-suite\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Accueil","item":"https:\/\/blog.capdata.fr\/"},{"@type":"ListItem","position":2,"name":"PGO : op\u00e9rateurs kubernetes pour PostgreSQL, la suite !"}]},{"@type":"WebSite","@id":"https:\/\/blog.capdata.fr\/#website","url":"https:\/\/blog.capdata.fr\/","name":"Capdata TECH BLOG","description":"Le blog technique sur les bases de donn\u00e9es de CAP DATA Consulting","publisher":{"@id":"https:\/\/blog.capdata.fr\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/blog.capdata.fr\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"fr-FR"},{"@type":"Organization","@id":"https:\/\/blog.capdata.fr\/#organization","name":"Capdata TECH BLOG","url":"https:\/\/blog.capdata.fr\/","logo":{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/blog.capdata.fr\/#\/schema\/logo\/image\/","url":"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/01\/logo_capdata.webp","contentUrl":"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/01\/logo_capdata.webp","width":800,"height":254,"caption":"Capdata TECH 
BLOG"},"image":{"@id":"https:\/\/blog.capdata.fr\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.linkedin.com\/company\/cap-data-consulting\/mycompany\/"]},{"@type":"Person","@id":"https:\/\/blog.capdata.fr\/#\/schema\/person\/136297da9f61d6e4878abe0f48bc5fbf","name":"David Baffaleuf","sameAs":["http:\/\/www.capdata.fr"],"url":"https:\/\/blog.capdata.fr\/index.php\/author\/dbaffaleuf\/"}]}},"_links":{"self":[{"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/posts\/10150","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/comments?post=10150"}],"version-history":[{"count":6,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/posts\/10150\/revisions"}],"predecessor-version":[{"id":10159,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/posts\/10150\/revisions\/10159"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/media\/10154"}],"wp:attachment":[{"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/media?parent=10150"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/categories?post=10150"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/tags?post=10150"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}