{"id":9778,"date":"2023-04-26T17:17:16","date_gmt":"2023-04-26T16:17:16","guid":{"rendered":"https:\/\/blog.capdata.fr\/?p=9778"},"modified":"2023-04-26T17:17:16","modified_gmt":"2023-04-26T16:17:16","slug":"kubegres-loperateur-kubernetes-cle-en-main-pour-postgresql","status":"publish","type":"post","link":"https:\/\/blog.capdata.fr\/index.php\/kubegres-loperateur-kubernetes-cle-en-main-pour-postgresql\/","title":{"rendered":"Kubegres : l&#8217;op\u00e9rateur Kubernetes cl\u00e9 en main pour PostgreSQL"},"content":{"rendered":"<a class=\"synved-social-button synved-social-button-share synved-social-size-24 synved-social-resolution-single synved-social-provider-twitter nolightbox\" data-provider=\"twitter\" target=\"_blank\" rel=\"nofollow\" title=\"Share on Twitter\" href=\"https:\/\/twitter.com\/intent\/tweet?url=https%3A%2F%2Fblog.capdata.fr%2Findex.php%2Fwp-json%2Fwp%2Fv2%2Fposts%2F9778&#038;text=Article%20sur%20le%20blog%20de%20la%20Capdata%20Tech%20Team%20%3A%20\" style=\"font-size: 0px;width:24px;height:24px;margin:0;margin-bottom:5px;margin-right:5px\"><img loading=\"lazy\" decoding=\"async\" alt=\"twitter\" title=\"Share on Twitter\" class=\"synved-share-image synved-social-image synved-social-image-share\" width=\"24\" height=\"24\" style=\"display: inline;width:24px;height:24px;margin: 0;padding: 0;border: none;box-shadow: none\" src=\"https:\/\/blog.capdata.fr\/wp-content\/plugins\/social-media-feather\/synved-social\/image\/social\/regular\/48x48\/twitter.png\" \/><\/a><a class=\"synved-social-button synved-social-button-share synved-social-size-24 synved-social-resolution-single synved-social-provider-linkedin nolightbox\" data-provider=\"linkedin\" target=\"_blank\" rel=\"nofollow\" title=\"Share on Linkedin\" 
href=\"https:\/\/www.linkedin.com\/shareArticle?mini=true&#038;url=https%3A%2F%2Fblog.capdata.fr%2Findex.php%2Fwp-json%2Fwp%2Fv2%2Fposts%2F9778&#038;title=Kubegres%20%3A%20l%E2%80%99op%C3%A9rateur%20Kubernetes%20cl%C3%A9%20en%20main%20pour%20PostgreSQL\" style=\"font-size: 0px;width:24px;height:24px;margin:0;margin-bottom:5px;margin-right:5px\"><img loading=\"lazy\" decoding=\"async\" alt=\"linkedin\" title=\"Share on Linkedin\" class=\"synved-share-image synved-social-image synved-social-image-share\" width=\"24\" height=\"24\" style=\"display: inline;width:24px;height:24px;margin: 0;padding: 0;border: none;box-shadow: none\" src=\"https:\/\/blog.capdata.fr\/wp-content\/plugins\/social-media-feather\/synved-social\/image\/social\/regular\/48x48\/linkedin.png\" \/><\/a><a class=\"synved-social-button synved-social-button-share synved-social-size-24 synved-social-resolution-single synved-social-provider-mail nolightbox\" data-provider=\"mail\" rel=\"nofollow\" title=\"Share by email\" href=\"mailto:?subject=Kubegres%20%3A%20l%E2%80%99op%C3%A9rateur%20Kubernetes%20cl%C3%A9%20en%20main%20pour%20PostgreSQL&#038;body=Article%20sur%20le%20blog%20de%20la%20Capdata%20Tech%20Team%20%3A%20:%20https%3A%2F%2Fblog.capdata.fr%2Findex.php%2Fwp-json%2Fwp%2Fv2%2Fposts%2F9778\" style=\"font-size: 0px;width:24px;height:24px;margin:0;margin-bottom:5px\"><img loading=\"lazy\" decoding=\"async\" alt=\"mail\" title=\"Share by email\" class=\"synved-share-image synved-social-image synved-social-image-share\" width=\"24\" height=\"24\" style=\"display: inline;width:24px;height:24px;margin: 0;padding: 0;border: none;box-shadow: none\" src=\"https:\/\/blog.capdata.fr\/wp-content\/plugins\/social-media-feather\/synved-social\/image\/social\/regular\/48x48\/mail.png\" \/><\/a><p><a href=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/04\/2containers.png\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/04\/2containers.png\" alt=\"\" 
width=\"623\" height=\"416\" class=\"left size-full wp-image-10070\" srcset=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/04\/2containers.png 623w, https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/04\/2containers-300x200.png 300w\" sizes=\"auto, (max-width: 623px) 100vw, 623px\" \/><\/a><\/p>\n<p>Hello \u00e0 toutes et tous !<\/p>\n<p>Pour faire suite \u00e0 <a href=\"https:\/\/blog.capdata.fr\/index.php\/postgresql-sur-la-solution-kubernetes-locale-minikube\/\">l&#8217;article d&#8217;Emmanuel<\/a> sur l&#8217;installation de PostgreSQL sur un cluster minikube local, aujourd&#8217;hui nous allons d\u00e9couvrir l&#8217;op\u00e9rateur <a href=\"https:\/\/www.kubegres.io\/\">Kubegres <\/a>qui permet de facilement d\u00e9ployer un cluster PostgreSQL avec Primary et Standby \u00e0 l&#8217;int\u00e9rieur de pods K8s, sans avoir \u00e0 cr\u00e9er chaque brique une par une comme on devrait le faire avec un simple StatefulSet. <\/p>\n<h2>Pourquoi Kubegres apporte un vrai plus<\/h2>\n<p>En un mot : <strong>SIM-PLI-CI-TE !<\/strong> <\/p>\n<p>Parce qu&#8217;il introduit un nouveau type d&#8217;objet : <\/p>\n<pre class=\"brush: yaml; title: ; notranslate\" title=\"\">kind: Kubegres\r\n<\/pre>\n<p>Les specs \u00e0 l&#8217;int\u00e9rieur de cet objet encapsulent d\u00e9j\u00e0 tout ce qui est n\u00e9cessaire pour cr\u00e9er les StatefulSets, les ClusterIPs, le Physical Volumes et le PVC associ\u00e9, la ConfigMap et les Pods. Pas besoin de tout cr\u00e9er \u00e0 l&#8217;avance et le fichier de d\u00e9ploiement est beaucoup plus compact !<\/p>\n<h2>Installation de Kubegres <\/h2>\n<p>Nous utiliserons <a href=\"https:\/\/kubernetes.io\/fr\/docs\/setup\/learning-environment\/minikube\/\">minikube <\/a>pour montrer comment d\u00e9ployer Kubegres, je vous renvoie \u00e0 l&#8217;article d&#8217;Emmanuel cit\u00e9 plus haut pour son installation. 
Comme nous allons devoir attribuer des fractions de ressource RAM et CPU entre les pods, j&#8217;activerai juste en plus la partie metrics-server :<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">$ minikube addons enable metrics-server<\/pre>\n<p>La premi\u00e8re \u00e9tape consiste \u00e0 installer l&#8217;op\u00e9rateur. Cette premi\u00e8re \u00e9tape va cr\u00e9er un nombre importants d&#8217;objets pour nous, comme en t\u00e9moigne le contenu de son fichier de d\u00e9ploiement <\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ curl --silent https:\/\/raw.githubusercontent.com\/reactive-tech\/kubegres\/v1.16\/kubegres.yaml \\\r\n  | grep  -w 'kind:' | awk -F':' '{print $2}'  \\ \r\n  | sort -u\r\n\r\n ClusterRole\r\n ClusterRoleBinding\r\n ConfigMap\r\n ControllerManagerConfig\r\n CustomResourceDefinition\r\n Deployment\r\n Kubegres\r\n Namespace\r\n Role\r\n RoleBinding\r\n Service\r\n ServiceAccount\r\n\r\n<\/pre>\n<p>:<br \/>\nOn d\u00e9ploie donc via <a href=\"https:\/\/kubernetes.io\/fr\/docs\/tasks\/tools\/install-kubectl\/\">kubectl<\/a>:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ kubectl apply -f https:\/\/raw.githubusercontent.com\/reactive-tech\/kubegres\/v1.16\/kubegres.yaml\r\nnamespace\/kubegres-system created\r\ncustomresourcedefinition.apiextensions.k8s.io\/kubegres.kubegres.reactive-tech.io created\r\nserviceaccount\/kubegres-controller-manager created\r\nrole.rbac.authorization.k8s.io\/kubegres-leader-election-role created\r\nclusterrole.rbac.authorization.k8s.io\/kubegres-manager-role created\r\nclusterrole.rbac.authorization.k8s.io\/kubegres-metrics-reader created\r\nclusterrole.rbac.authorization.k8s.io\/kubegres-proxy-role created\r\nrolebinding.rbac.authorization.k8s.io\/kubegres-leader-election-rolebinding created\r\nclusterrolebinding.rbac.authorization.k8s.io\/kubegres-manager-rolebinding 
created\r\nclusterrolebinding.rbac.authorization.k8s.io\/kubegres-proxy-rolebinding created\r\nconfigmap\/kubegres-manager-config created\r\nservice\/kubegres-controller-manager-metrics-service created\r\ndeployment.apps\/kubegres-controller-manager created\r\n<\/pre>\n<p>Un namespace <em>kubegres-system<\/em> a \u00e9t\u00e9 cr\u00e9\u00e9, dans lequel notre cluster pourra \u00e9ventuellement s&#8217;inscrire. Cela dit, la bonne pratique consisterait \u00e0 cr\u00e9er un namespace \u00e0 part et laisser le controller-manager kubegres dans son namespace syst\u00e8me, mais pour ne pas interf\u00e9rer avec mes autres namespaces, je vais volontairement rajouter le cluster dedans:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ kubectl get all --namespace=kubegres-system\r\nNAME                                               READY   STATUS    RESTARTS   AGE\r\npod\/kubegres-controller-manager-794468bbff-bxzxk   2\/2     Running   0          3m35s\r\n\r\nNAME                                                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE\r\nservice\/kubegres-controller-manager-metrics-service   ClusterIP   10.100.182.92   &lt;none&gt;        8443\/TCP   3m35s\r\n\r\nNAME                                          READY   UP-TO-DATE   AVAILABLE   AGE\r\ndeployment.apps\/kubegres-controller-manager   1\/1     1            1           3m35s\r\n\r\nNAME                                                     DESIRED   CURRENT   READY   AGE\r\nreplicaset.apps\/kubegres-controller-manager-794468bbff   1         1         1       3m35s\r\n<\/pre>\n<h2>Cr\u00e9ation du cluster en 15.2<\/h2>\n<p>Avant de cr\u00e9er le cluster PostgreSQL, il va falloir cr\u00e9er un secret qui va contenir les mots de passe du primaire et de sa standby. 
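<\/p>\n<p>Note in passing that the values under stringData are given in clear text: this is standard Kubernetes behaviour, and the API server stores them base64-encoded under data. A quick sanity check of what will actually be stored (using our example password):<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n# Encode the superuser password the way Kubernetes will store it under 'data'\r\necho -n 'capdata' | base64        # prints Y2FwZGF0YQ==\r\n\r\n# ...and decode it back, for instance when reading the Secret later on\r\necho -n 'Y2FwZGF0YQ==' | base64 -d   # prints capdata\r\n<\/pre>\n<p>Let&#8217;s create the Secret: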
<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ vi postgres-secret.yaml\r\n(...)\r\napiVersion: v1\r\nkind: Secret\r\nmetadata:\r\n  name: mypostgres-secret\r\n  namespace: kubegres-system\r\ntype: Opaque\r\nstringData:\r\n  superUserPassword: capdata\r\n  replicationUserPassword: capdatarep\r\n\r\n$ kubectl apply -f postgres-secret.yaml -n kubegres-system\r\nsecret\/mypostgres-secret created\r\n<\/pre>\n<p>Enfin nous pouvons cr\u00e9er le cluster avec un seul fichier de d\u00e9ploiement:<\/p>\n<pre class=\"brush: yaml; title: ; notranslate\" title=\"\">\r\napiVersion: kubegres.reactive-tech.io\/v1\r\nkind: Kubegres\r\nmetadata:\r\n  name: kpostgres\r\n  namespace: kubegres-system\r\n\r\nspec:\r\n\r\n  replicas: 2\r\n  image: postgres:15.2\r\n  port: 5432\r\n\r\n  database:\r\n    size: 200Mi\r\n    storageClassName: standard\r\n    volumeMount: \/var\/lib\/postgresql\/data\r\n\r\n  failover:\r\n    isDisabled: false\r\n    promotePod: &quot;kpostgres-2-0&quot;\r\n\r\n  resources:\r\n    limits:\r\n      memory: &quot;1Gi&quot;\r\n      cpu: &quot;1&quot;\r\n    requests:\r\n      memory: &quot;500Mi&quot;\r\n      cpu: &quot;0.5&quot;\r\n\r\n  probe:\r\n     livenessProbe:\r\n        exec:\r\n           command:\r\n             - sh\r\n             - -c\r\n             - exec pg_isready -U postgres -h $POD_IP\r\n        failureThreshold: 10\r\n        initialDelaySeconds: 60\r\n        periodSeconds: 20\r\n        successThreshold: 1\r\n        timeoutSeconds: 15\r\n\r\n     readinessProbe:\r\n        exec:\r\n           command:\r\n             - sh\r\n             - -c\r\n             - exec pg_isready -U postgres -h $POD_IP\r\n        failureThreshold: 3\r\n        initialDelaySeconds: 5\r\n        periodSeconds: 5\r\n        successThreshold: 1\r\n        timeoutSeconds: 3\r\n\r\n  env:\r\n    - name: POSTGRES_PASSWORD\r\n      valueFrom:\r\n        secretKeyRef:\r\n          name: mypostgres-secret\r\n          key: 
superUserPassword\r\n\r\n    - name: POSTGRES_REPLICATION_PASSWORD\r\n      valueFrom:\r\n        secretKeyRef:\r\n          name: mypostgres-secret\r\n          key: replicationUserPassword\r\n<\/pre>\n<p><strong>Remarques sur les sections <\/strong>:<br \/>\n&#8211; La spec database.size permet de dimensionner le PVC \u00e0 une taille initiale, mais attention ! En fonction de la version de Kubernetes, il peut \u00eatre compliqu\u00e9 de modifier cette valeur ensuite comme rapport\u00e9 dans <a href=\"https:\/\/github.com\/reactive-tech\/kubegres\/issues\/49\">cet issue du github Kubegres<\/a>.<br \/>\n&#8211; On ne cr\u00e9\u00e9 &#8216;que&#8217; 2 r\u00e9plicas au sens StatefulSet, c&#8217;est \u00e0 dire un primaire et une standby.<br \/>\n&#8211; La spec failover permet de d\u00e9sactiver ou d&#8217;activer le promote automatique de la standby, et de dire quel est le noeud pr\u00e9f\u00e9rentiel.<br \/>\n&#8211; 2 types de sondes : une pour d\u00e9cider de red\u00e9marrer le container en cas d&#8217;\u00e9chec de connexion (livenessProbe), et une autre pour autoriser les connexions (readinessProbe). Le fonctionnement du failover est expliqu\u00e9 plus loin.<br \/>\n&#8211; resources permet de limiter l&#8217;utilisation des pods \u00e0 des plages de CPU et RAM en jouant avec requests et limit.<br \/>\n&#8211; Il est aussi possible d&#8217;utiliser l&#8217;anti-affinit\u00e9 pour emp\u00eacher les pods primary et standby d&#8217;atterrir sur le m\u00eame node K8s. On ne le fait \u00e9videmment pas dans notre exemple car sur minikube cela n&#8217;aurait aucun sens. 
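<\/p>\n<p>On a real multi-node cluster, here is a sketch of what such an anti-affinity rule could look like. This is an assumption based on the scheduler.affinity property listed in the Kubegres documentation (a standard Kubernetes affinity block); the app label value is assumed here to match the name of the Kubegres resource:<\/p>\n<pre class=\"brush: yaml; title: ; notranslate\" title=\"\">\r\n  # Hypothetical excerpt of the Kubegres spec: never schedule two pods\r\n  # of the kpostgres cluster on the same worker node\r\n  scheduler:\r\n    affinity:\r\n      podAntiAffinity:\r\n        requiredDuringSchedulingIgnoredDuringExecution:\r\n          - labelSelector:\r\n              matchExpressions:\r\n                - key: app\r\n                  operator: In\r\n                  values:\r\n                    - kpostgres\r\n            topologyKey: kubernetes.io\/hostname\r\n<\/pre>\n<p>With the required... variant, a pod simply stays Pending if no other node is available; preferredDuringSchedulingIgnoredDuringExecution is the softer option. 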
<\/p>\n<p>The full list of properties can be found on the Kubegres <a href=\"https:\/\/www.kubegres.io\/doc\/properties-explained.html\">documentation page<\/a>.<\/p>\n<p>Once the cluster is deployed, we can check all the new objects that have been created:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ kubectl apply -f postgres-cluster.yaml -n kubegres-system\r\nkubegres.kubegres.reactive-tech.io\/kpostgres created\r\n\r\n$ kubectl get all -n kubegres-system\r\nNAME                                               READY   STATUS    RESTARTS       AGE\r\npod\/kpostgres-1-0                                  1\/1     Running   2              5d16h\r\npod\/kpostgres-2-0                                  1\/1     Running   3              5d16h\r\npod\/kubegres-controller-manager-794468bbff-bxzxk   2\/2     Running   13 (23m ago)   5d18h\r\n\r\nNAME                                                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE\r\nservice\/kpostgres                                     ClusterIP   None            &lt;none&gt;        5432\/TCP   5d16h\r\nservice\/kpostgres-replica                             ClusterIP   None            &lt;none&gt;        5432\/TCP   5d16h\r\nservice\/kubegres-controller-manager-metrics-service   ClusterIP   10.100.182.92   &lt;none&gt;        8443\/TCP   5d18h\r\n\r\nNAME                                          READY   UP-TO-DATE   AVAILABLE   AGE\r\ndeployment.apps\/kubegres-controller-manager   1\/1     1            1           5d18h\r\n\r\nNAME                                                     DESIRED   CURRENT   READY   AGE\r\nreplicaset.apps\/kubegres-controller-manager-794468bbff   1         1         1       5d18h\r\n\r\nNAME                           READY   AGE\r\nstatefulset.apps\/kpostgres-1   1\/1     5d16h\r\nstatefulset.apps\/kpostgres-2   1\/1     5d16h\r\n<\/pre>\n<p>In addition to the <em>controller manager<\/em>, we now have 2 new pods, 2 ClusterIP services and 2 StatefulSets, one for each PostgreSQL instance.<br \/>\nFrom there, we can test connecting to the (kpostgres) service by starting an <em>ephemeral<\/em> client container (kubectl run &#8230; --rm). Keep in mind that by default kubegres only creates <em>Headless Services<\/em> (ClusterIP services without an assigned address) to avoid making them visible from outside the cluster. We will stick to the principle that on a Kubernetes cluster, <strong>everything is a deployment \/ pod<\/strong>:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ kubectl run postgresql-postgresql-client --rm --tty -i --restart='Never' \\\r\n  --namespace kubegres-system  --image postgres:15.2 \\\r\n  --env=&quot;PGPASSWORD=capdata&quot; --command -- psql \\\r\n  --host kpostgres -U postgres -c &quot;select version();&quot;\r\n\r\n                    version\r\n\r\n--------------------------------------------------------------------------------\r\n---------------------------------------------\r\n PostgreSQL 15.2 (Debian 15.2-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by g\r\ncc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit\r\n(1 row)\r\n\r\npod &quot;postgresql-postgresql-client&quot; deleted\r\n<\/pre>\n<h2>Behaviour when the primary server fails<\/h2>\n<p>The deployment is configured with 2 probes (liveness and readiness), which both use the pg_isready command to check the availability of the primary and standby instances. We can actually run it ourselves to verify the command&#8217;s return value:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ kubectl exec --tty --stdin -n kubegres-system kpostgres-1-0 -- pg_isready -U postgres &amp;&amp; echo $?\r\n\/var\/run\/postgresql:5432 - accepting connections\r\n0\r\n<\/pre>\n<p>Our failover is set to automatic, with a preference for resuming on node 2. Let&#8217;s simulate a loss of service by running pg_ctl stop on the first pod:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ kubectl exec --tty --stdin -n kubegres-system kpostgres-1-0 -- su postgres \\\r\n  -c &quot;pg_ctl stop -D \/var\/lib\/postgresql\/data\/pgdata -m fast&quot;                                                                          \r\nwaiting for server to shut down....command terminated with exit code 137\r\n<\/pre>\n<p>We can see that an immediate reconnection does not work right away:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ kubectl run postgresql-postgresql-client --rm --tty -i --restart='Never' --namespace kubegres-system  --image postgres:15.2 --env=&quot;PGPASSWORD=capdata&quot; --command -- psql --host kpostgres -U postgres -c &quot;select inet_server_addr();&quot;\r\n(...)\r\npsql: error: could not translate host name &quot;kpostgres&quot; to address: Temporary failure in name resolution\r\npod &quot;postgresql-postgresql-client&quot; deleted\r\npod kubegres-system\/postgresql-postgresql-client terminated (Error)\r\n<\/pre>\n<p>According to the logs, the failover only takes about ten seconds:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ kubectl logs pod\/kubegres-controller-manager-794468bbff-bxzxk -c manager -n kubegres-system --timestamps | grep 'FailOver:'\r\n2023-04-19T20:11:22.687346601Z 1.6819350826873376e+09   INFO    controllers.Kubegres    FailOver: Deleting the 
failing Primary StatefulSet.     {&quot;Primary name&quot;: &quot;kpostgres-1&quot;}\r\n2023-04-19T20:11:22.690961293Z 1.6819350826909401e+09   INFO    controllers.Kubegres    FailOver: Waiting before promoting a Replica to a Primary...    {&quot;Replica to promote&quot;: &quot;kpostgres-2&quot;}\r\n2023-04-19T20:11:33.712923151Z 1.6819350937128017e+09   INFO    controllers.Kubegres    FailOver: Promoting Replica to Primary. {&quot;Replica to promote&quot;: &quot;kpostgres-2&quot;}\r\n2023-04-19T20:11:33.713030040Z 1.6819350937129502e+09   DEBUG   events  Normal  {&quot;object&quot;: {&quot;kind&quot;:&quot;Kubegres&quot;,&quot;namespace&quot;:&quot;kubegres-system&quot;,&quot;name&quot;:&quot;kpostgres&quot;,&quot;uid&quot;:&quot;56c2a503-93f5-4bf8-8d0a-fe88cdaf2bde&quot;,&quot;apiVersion&quot;:&quot;kubegres.reactive-tech.io\/v1&quot;,&quot;resourceVersion&quot;:&quot;95195&quot;}, &quot;reason&quot;: &quot;FailOver&quot;, &quot;message&quot;: &quot;FailOver: Promoting Replica to Primary. 
'Replica to promote': kpostgres-2&quot;}\r\n<\/pre>\n<p>The new connection shows that we have changed host:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ kubectl run postgresql-postgresql-client --rm --tty -i --restart='Never' --namespace kubegres-system  --image postgres:15.2 --env=&quot;PGPASSWORD=capdata&quot; --command -- psql --host kpostgres -U postgres -c &quot;select inet_server_addr();&quot;\r\n inet_server_addr\r\n------------------\r\n 172.17.0.6\r\n(1 row)\r\n\r\npod &quot;postgresql-postgresql-client&quot; deleted\r\n<\/pre>\n<p>If we take an inventory of the objects, we can see that a new Pod and a new StatefulSet have been created (AGE=43s):<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ kubectl get pods,svc,statefulset -n kubegres-system\r\nNAME                                               READY   STATUS    RESTARTS       AGE\r\npod\/kpostgres-2-0                                  1\/1     Running   0              85s\r\npod\/kpostgres-3-0                                  1\/1     Running   0              43s\r\npod\/kubegres-controller-manager-794468bbff-bxzxk   2\/2     Running   18 (41m ago)   6d7h\r\n\r\nNAME                                                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE\r\nservice\/kpostgres                                     ClusterIP   None            &lt;none&gt;        5432\/TCP   6d5h\r\nservice\/kpostgres-replica                             ClusterIP   None            &lt;none&gt;        5432\/TCP   6d5h\r\nservice\/kubegres-controller-manager-metrics-service   ClusterIP   10.100.182.92   &lt;none&gt;        8443\/TCP   6d7h\r\n\r\nNAME                           READY   AGE\r\nstatefulset.apps\/kpostgres-2   1\/1     6d5h\r\nstatefulset.apps\/kpostgres-3   1\/1     43s\r\n<\/pre>\n<h2>Setting up backups<\/h2>\n<p>To generate pg_dumpall-style backups, we just need to create a dedicated PVC and add a schedule to our deployment file, which in turn will create a CronJob resource:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ vi postgres-backup.yaml\r\n(...)\r\napiVersion: v1\r\nkind: PersistentVolumeClaim\r\nmetadata:\r\n  name: my-backup-pvc\r\n  namespace: kubegres-system\r\nspec:\r\n  storageClassName: &quot;standard&quot;\r\n  accessModes:\r\n    - ReadWriteOnce\r\n  resources:\r\n    requests:\r\n      storage: 200Mi\r\n\r\n$ kubectl apply -f postgres-backup.yaml\r\npersistentvolumeclaim\/my-backup-pvc created\r\n\r\n$ kubectl get pvc -n kubegres-system\r\nNAME                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE\r\nmy-backup-pvc                Bound    pvc-b84656cd-904b-4a2b-89b1-e43ddb062be8   200Mi      RWO            standard       16s\r\npostgres-db-kpostgres-1-0    Bound    pvc-03f2e981-a1aa-446e-8f63-e5164a59f742   200Mi      RWO            standard       6d6h\r\npostgres-db-kpostgres-2-0    Bound    pvc-cbdf5810-e5e0-4950-8b06-5243f835be7d   200Mi      RWO            standard       6d6h\r\npostgres-db-kpostgres-3-0    Bound    pvc-dffef05b-f7b6-4618-802f-5fbfff2a0cba   200Mi      RWO            standard       37m\r\n<\/pre>\n<p>The CronJob is now in place; we scheduled it every 5 minutes so that we can see one run fairly quickly:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ kubectl get CronJob -n kubegres-system\r\nNAME               SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE\r\nbackup-kpostgres   *\/5 * * * *   False     0        72s             3m38s\r\n<\/pre>\n<p>To run each backup, kubegres creates a new pod every time, which makes it easy to check that the backups completed properly:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n$ kubectl get pods -n 
kubegres-system\r\nNAME                                           READY   STATUS      RESTARTS       AGE\r\nbackup-kpostgres-28032310-97jqx                0\/1     Completed   0              85s\r\nkpostgres-4-0                                  1\/1     Running     0              5m2s\r\nkpostgres-5-0                                  1\/1     Running     0              4m4s\r\nkubegres-controller-manager-794468bbff-bxzxk   2\/2     Running     18 (99m ago)   6d8h\r\n\r\n$ kubectl logs backup-kpostgres-28032310-97jqx -n kubegres-system\r\n19\/04\/2023 21:10:01 - Starting DB backup of Kubegres resource kpostgres into file: \/var\/lib\/backup\/kpostgres-backup-19_04_2023_21_10_01.gz\r\n19\/04\/2023 21:10:01 - Running: pg_dumpall -h kpostgres-replica -U postgres -c | gzip &gt; \/var\/lib\/backup\/kpostgres-backup-19_04_2023_21_10_01.gz\r\n19\/04\/2023 21:10:01 - DB backup completed for Kubegres resource kpostgres into file: \/var\/lib\/backup\/kpostgres-backup-19_04_2023_21_10_01.gz\r\n<\/pre>\n<h2>Conclusion<\/h2>\n<p>Kubegres makes deploying PostgreSQL clusters with Streaming Replication remarkably easy. 
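<\/p>\n<p>One detail not shown earlier: the schedule itself is declared directly in the Kubegres resource. As a sketch (assuming the backup properties &#8211; schedule, pvcName and volumeMount &#8211; described in the Kubegres documentation), the block added to the spec of our kpostgres resource would look like this:<\/p>\n<pre class=\"brush: yaml; title: ; notranslate\" title=\"\">\r\n  # Hypothetical excerpt of the Kubegres spec: dump every 5 minutes\r\n  # into the my-backup-pvc volume created above\r\n  backup:\r\n    schedule: &quot;*\/5 * * * *&quot;\r\n    pvcName: my-backup-pvc\r\n    volumeMount: \/var\/lib\/backup\r\n<\/pre>\n<p>As the job logs above show, the dump is taken from the kpostgres-replica service, so it does not load the primary. 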
It would have been much more tedious to create everything by hand here.<br \/>\nWe did not cover it in this article, but it is also possible to customize the configuration by providing your own ConfigMap, to create additional PVCs to manage archives, and so on. In the next episode, we will compare it to <a href=\"https:\/\/github.com\/CrunchyData\/postgres-operator\">PGO<\/a>, the competing operator from Crunchy Data.<\/p>\n<p>See you soon, and go easy on the chocolates \ud83d\ude42 !<\/p>\n","protected":false,"author":2,"yoast_head":"<meta name=\"author\" content=\"David Baffaleuf\" \/>"}
BLOG\"},\"image\":{\"@id\":\"https:\/\/blog.capdata.fr\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.linkedin.com\/company\/cap-data-consulting\/mycompany\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/blog.capdata.fr\/#\/schema\/person\/136297da9f61d6e4878abe0f48bc5fbf\",\"name\":\"David Baffaleuf\",\"sameAs\":[\"http:\/\/www.capdata.fr\"],\"url\":\"https:\/\/blog.capdata.fr\/index.php\/author\/dbaffaleuf\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Kubegres : l'op\u00e9rateur Kubernetes cl\u00e9 en main pour PostgreSQL - Capdata TECH BLOG","description":"Kubegres : l'op\u00e9rateur Kubernetes cl\u00e9 en main pour PostgreSQL Kubegres : l'op\u00e9rateur Kubernetes cl\u00e9 en main pour PostgreSQL Devops Devops","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/blog.capdata.fr\/index.php\/kubegres-loperateur-kubernetes-cle-en-main-pour-postgresql\/","og_locale":"fr_FR","og_type":"article","og_title":"Kubegres : l'op\u00e9rateur Kubernetes cl\u00e9 en main pour PostgreSQL - Capdata TECH BLOG","og_description":"Kubegres : l'op\u00e9rateur Kubernetes cl\u00e9 en main pour PostgreSQL Kubegres : l'op\u00e9rateur Kubernetes cl\u00e9 en main pour PostgreSQL Devops Devops","og_url":"https:\/\/blog.capdata.fr\/index.php\/kubegres-loperateur-kubernetes-cle-en-main-pour-postgresql\/","og_site_name":"Capdata TECH BLOG","article_published_time":"2023-04-26T16:17:16+00:00","og_image":[{"width":623,"height":416,"url":"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/04\/2containers.png","type":"image\/png"}],"author":"David Baffaleuf","twitter_card":"summary_large_image","twitter_misc":{"\u00c9crit par":"David Baffaleuf","Dur\u00e9e de lecture estim\u00e9e":"9 
minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/blog.capdata.fr\/index.php\/kubegres-loperateur-kubernetes-cle-en-main-pour-postgresql\/#article","isPartOf":{"@id":"https:\/\/blog.capdata.fr\/index.php\/kubegres-loperateur-kubernetes-cle-en-main-pour-postgresql\/"},"author":{"name":"David Baffaleuf","@id":"https:\/\/blog.capdata.fr\/#\/schema\/person\/136297da9f61d6e4878abe0f48bc5fbf"},"headline":"Kubegres : l&#8217;op\u00e9rateur Kubernetes cl\u00e9 en main pour PostgreSQL","datePublished":"2023-04-26T16:17:16+00:00","dateModified":"2023-04-26T16:17:16+00:00","mainEntityOfPage":{"@id":"https:\/\/blog.capdata.fr\/index.php\/kubegres-loperateur-kubernetes-cle-en-main-pour-postgresql\/"},"wordCount":2075,"commentCount":1,"publisher":{"@id":"https:\/\/blog.capdata.fr\/#organization"},"keywords":["Kubernetes","streaming replication"],"articleSection":["Container","Devops","PostgreSQL"],"inLanguage":"fr-FR","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/blog.capdata.fr\/index.php\/kubegres-loperateur-kubernetes-cle-en-main-pour-postgresql\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/blog.capdata.fr\/index.php\/kubegres-loperateur-kubernetes-cle-en-main-pour-postgresql\/","url":"https:\/\/blog.capdata.fr\/index.php\/kubegres-loperateur-kubernetes-cle-en-main-pour-postgresql\/","name":"Kubegres : l'op\u00e9rateur Kubernetes cl\u00e9 en main pour PostgreSQL - Capdata TECH BLOG","isPartOf":{"@id":"https:\/\/blog.capdata.fr\/#website"},"datePublished":"2023-04-26T16:17:16+00:00","dateModified":"2023-04-26T16:17:16+00:00","description":"Kubegres : l'op\u00e9rateur Kubernetes cl\u00e9 en main pour PostgreSQL Kubegres : l'op\u00e9rateur Kubernetes cl\u00e9 en main pour PostgreSQL Devops 
Devops","breadcrumb":{"@id":"https:\/\/blog.capdata.fr\/index.php\/kubegres-loperateur-kubernetes-cle-en-main-pour-postgresql\/#breadcrumb"},"inLanguage":"fr-FR","potentialAction":[{"@type":"ReadAction","target":["https:\/\/blog.capdata.fr\/index.php\/kubegres-loperateur-kubernetes-cle-en-main-pour-postgresql\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/blog.capdata.fr\/index.php\/kubegres-loperateur-kubernetes-cle-en-main-pour-postgresql\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Accueil","item":"https:\/\/blog.capdata.fr\/"},{"@type":"ListItem","position":2,"name":"Kubegres : l&#8217;op\u00e9rateur Kubernetes cl\u00e9 en main pour PostgreSQL"}]},{"@type":"WebSite","@id":"https:\/\/blog.capdata.fr\/#website","url":"https:\/\/blog.capdata.fr\/","name":"Capdata TECH BLOG","description":"Le blog technique sur les bases de donn\u00e9es de CAP DATA Consulting","publisher":{"@id":"https:\/\/blog.capdata.fr\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/blog.capdata.fr\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"fr-FR"},{"@type":"Organization","@id":"https:\/\/blog.capdata.fr\/#organization","name":"Capdata TECH BLOG","url":"https:\/\/blog.capdata.fr\/","logo":{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/blog.capdata.fr\/#\/schema\/logo\/image\/","url":"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/01\/logo_capdata.webp","contentUrl":"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/01\/logo_capdata.webp","width":800,"height":254,"caption":"Capdata TECH BLOG"},"image":{"@id":"https:\/\/blog.capdata.fr\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.linkedin.com\/company\/cap-data-consulting\/mycompany\/"]},{"@type":"Person","@id":"https:\/\/blog.capdata.fr\/#\/schema\/person\/136297da9f61d6e4878abe0f48bc5fbf","name":"David 
Baffaleuf","sameAs":["http:\/\/www.capdata.fr"],"url":"https:\/\/blog.capdata.fr\/index.php\/author\/dbaffaleuf\/"}]}},"_links":{"self":[{"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/posts\/9778","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/comments?post=9778"}],"version-history":[{"count":14,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/posts\/9778\/revisions"}],"predecessor-version":[{"id":10077,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/posts\/9778\/revisions\/10077"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/media\/10070"}],"wp:attachment":[{"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/media?parent=9778"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/categories?post=9778"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/tags?post=9778"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}