{"id":9753,"date":"2023-03-29T15:42:35","date_gmt":"2023-03-29T14:42:35","guid":{"rendered":"https:\/\/blog.capdata.fr\/?p=9753"},"modified":"2023-03-31T13:44:25","modified_gmt":"2023-03-31T12:44:25","slug":"postgresql-sur-la-solution-kubernetes-locale-minikube","status":"publish","type":"post","link":"https:\/\/blog.capdata.fr\/index.php\/postgresql-sur-la-solution-kubernetes-locale-minikube\/","title":{"rendered":"PostgreSQL on the local Kubernetes solution Minikube"},"content":{"rendered":"<p><img loading=\"lazy\" decoding=\"async\" class=\" wp-image-9756\" src=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/03\/istockphoto-486570435-612x612-1-300x200.jpg\" alt=\"\" width=\"490\" height=\"326\" srcset=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/03\/istockphoto-486570435-612x612-1-300x200.jpg 300w, https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/03\/istockphoto-486570435-612x612-1.jpg 612w\" sizes=\"auto, (max-width: 490px) 100vw, 490px\" \/><\/p>\n<p>Hello<\/p>\n<p>Some time ago, I presented a first article on installing a PostgreSQL database instance under Docker. It is <a href=\"https:\/\/blog.capdata.fr\/index.php\/containeriser-une-base-de-donnees-postgresql-avec-docker\/\">that article<\/a> that gave us our first step into the world of containerised services.<\/p>\n<p>Today&#8217;s article builds on the same concepts, namely how to install and configure PostgreSQL with <a href=\"https:\/\/minikube.sigs.k8s.io\/docs\/\">minikube<\/a> and <a href=\"https:\/\/kubernetes.io\/fr\/\">Kubernetes<\/a>.<\/p>\n<h2>Overview of the Kubernetes environment<\/h2>\n<p>Before introducing Minikube, we need to talk about Kubernetes.<\/p>\n<p>Kubernetes is a container orchestration tool: an open-source platform for managing application deployments.<br \/>\nTo run it here, we need a virtualised environment in which the hypervisor communicates with prepackaged application layers that embed their own libraries. These application layers are what we call containers, and they can run on &#8220;on premise&#8221; servers or in a Cloud service.<\/p>\n<p>It is around this concept that Minikube was created. 
This tool uses the features of Kubernetes itself, but deploys everything on a single node.<\/p>\n<p>You can therefore enjoy a complete Kubernetes ecosystem on a simple desktop PC (with a demanding RAM\/CPU configuration).<\/p>\n<p>The goal of this article is to deploy a PostgreSQL instance on Minikube.<\/p>\n<h2>AWS-specific prerequisites<\/h2>\n<p>If, like me, you use AWS EC2 VMs, there are a few things to know.<\/p>\n<p>First of all, AWS offers a service named EKS, <a href=\"https:\/\/aws.amazon.com\/fr\/eks\/\">Elastic Kubernetes Service<\/a>, for managing clusters directly integrated into AWS.<br \/>\nWith it, there is no need to configure Kubernetes manually through &#8220;kubectl&#8221; commands, and tool updates are automated.<\/p>\n<p>AWS also offers <a href=\"https:\/\/aws.amazon.com\/fr\/fargate\/\">AWS Fargate<\/a>, a service for running containers without worrying about which type of server to provision. The user of this solution only deals with the application side and its scalability needs; AWS does the rest.<\/p>\n<p>But for this article, since we want to use Minikube, we need a VM that supports virtualisation. On AWS, it is the &#8220;bare metal&#8221; EC2 instances that meet this requirement. 
So pay attention to this point, and above all take billing into consideration, as it is far from negligible.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-9755 size-full\" src=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/03\/ec2_metal.jpg\" alt=\"\" width=\"1356\" height=\"606\" srcset=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/03\/ec2_metal.jpg 1356w, https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/03\/ec2_metal-300x135.jpg 300w, https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/03\/ec2_metal-1024x458.jpg 1024w, https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/03\/ec2_metal-768x343.jpg 768w\" sizes=\"auto, (max-width: 1356px) 100vw, 1356px\" \/><\/p>\n<p>For our example, we will use a &#8220;c5.metal&#8221; EC2 instance. This instance type supports virtualisation.<\/p>\n<h2>The installation steps<\/h2>\n<p>First, make sure the CPUs of our instance support virtualisation. 
Run the commands below.<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\"># egrep -q 'vmx|svm' \/proc\/cpuinfo &amp;&amp; echo yes || echo no\r\nyes<\/pre>\n<p>or<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\"># grep -E &quot;vmx|svm&quot; \/proc\/cpuinfo\r\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear flush_l1d arch_capabilities\r\n.........<\/pre>\n<p>The operating system chosen for this server is Rocky Linux 8.7, a Red Hat fork with which we can manage our installation packages through yum.<br \/>\nSeveral packages need to be installed to get our single-node cluster working.<\/p>\n<h3>Installing the KVM layer<\/h3>\n<p>In order to install Minikube, we need a driver to run the cluster in.<br \/>\nFor the Docker article, we used the Docker Engine, which has the particularity of doing without virtualisation and a hypervisor.<\/p>\n<p>For our example, the idea is to use KVM (Kernel-based Virtual Machine), which requires active virtualisation.<br \/>\nInstall the following packages on the server:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\"># yum update\r\n# yum install qemu-kvm libvirt libguestfs-tools virt-install<\/pre>\n<p>&nbsp;<\/p>\n<p>The virtualisation library must be enabled and its service started automatically:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\"># systemctl enable libvirtd.service\r\n# systemctl start libvirtd.service<\/pre>\n<h3>Installing helm and kubectl<\/h3>\n<p>These tools are needed to administer our cluster and to install applications. 
Helm will be used to install PostgreSQL from a repository, and kubectl is the command interpreter for our cluster.<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\"># curl -LO &quot;https:\/\/dl.k8s.io\/release\/$(curl -L -s https:\/\/dl.k8s.io\/release\/stable.txt)\/bin\/linux\/amd64\/kubectl&quot;\r\n% Total % Received % Xferd Average Speed Time Time Time Current\r\nDload Upload Total Spent Left Speed\r\n100 138 100 138 0 0 1179 0 --:--:-- --:--:-- --:--:-- 1179\r\n100 45.8M 100 45.8M 0 0 70.1M 0 --:--:-- --:--:-- --:--:-- 117M<\/pre>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\"># curl -fsSL -o get_helm.sh https:\/\/raw.githubusercontent.com\/helm\/helm\/main\/scripts\/get-helm-3\r\n\r\n# chmod 700 get_helm.sh\r\n# .\/get_helm.sh\r\nDownloading https:\/\/get.helm.sh\/helm-v3.11.2-linux-amd64.tar.gz\r\nVerifying checksum... Done.\r\nPreparing to install helm into \/usr\/local\/bin\r\nhelm installed into \/usr\/local\/bin\/helm<\/pre>\n<p>Install these binaries in &#8220;\/usr\/local\/bin&#8221;. 
Note that from now on your PATH variable must contain the path to this directory.<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\"># install kubectl \/usr\/local\/bin\/kubectl\r\n# install helm \/usr\/local\/bin\/helm\r\n# chmod 755 \/usr\/local\/bin\/helm\r\n# chmod 755 \/usr\/local\/bin\/kubectl<\/pre>\n<p>Validate the helm and kubectl installation.<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\"> # helm version\r\nversion.BuildInfo{\r\nVersion:&quot;v3.11.2&quot;, \r\nGitCommit:&quot;912ebc1cd10d38d340f048efaf0abda047c3468e&quot;, \r\nGitTreeState:&quot;clean&quot;, \r\nGoVersion:&quot;go1.18.10&quot;\r\n}<\/pre>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\"># kubectl version -o json\r\n{\r\n&quot;clientVersion&quot;: {\r\n&quot;major&quot;: &quot;1&quot;,\r\n&quot;minor&quot;: &quot;26&quot;,\r\n&quot;gitVersion&quot;: &quot;v1.26.2&quot;,\r\n&quot;gitCommit&quot;: &quot;fc04e732bb3e7198d2fa44efa5457c7c6f8c0f5b&quot;,\r\n&quot;gitTreeState&quot;: &quot;clean&quot;,\r\n&quot;buildDate&quot;: &quot;2023-02-22T13:39:03Z&quot;,\r\n&quot;goVersion&quot;: &quot;go1.19.6&quot;,\r\n&quot;compiler&quot;: &quot;gc&quot;,\r\n&quot;platform&quot;: &quot;linux\/amd64&quot;\r\n},\r\n&quot;kustomizeVersion&quot;: &quot;v4.5.7&quot;\r\n}<\/pre>\n<h3 tabindex=\"0\">Installing minikube<\/h3>\n<p>Download the Minikube binary, then place it in the &#8220;\/usr\/local\/bin&#8221; directory.<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\"># curl -LO https:\/\/storage.googleapis.com\/minikube\/releases\/latest\/minikube-linux-amd64\r\n# install minikube-linux-amd64 \/usr\/local\/bin\/minikube<\/pre>\n<p>Note that minikube must run under a dedicated linux user, other than &#8220;root&#8221;. 
We therefore create a user belonging to the two groups &#8220;libvirt&#8221; and &#8220;qemu&#8221;.<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\"># useradd -u 1001 -g qemu -G qemu,libvirt manu\r\n# passwd manu<\/pre>\n<p>Log in as this new user and check its groups.<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\"># su - manu\r\n[manu@ ~]$ id\r\nuid=1001(manu) gid=107(qemu) groups=107(qemu),986(libvirt) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023<\/pre>\n<h2>Managing the Minikube cluster<\/h2>\n<p>Once the packages are installed, we need to start Minikube with our dedicated linux account.<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">manu@~$ minikube start --driver=kvm2\r\n* minikube v1.29.0 on Rocky 8.7\r\n* Using the kvm2 driver based on user configuration\r\n* Downloading driver docker-machine-driver-kvm2:\r\n docker-machine-driver-kvm2-...: 65 B \/ 65 B [---------] 100.00% ? p\/s 0s\r\n docker-machine-driver-kvm2-...: 12.30 MiB \/ 12.30 MiB 100.00% 13.05 MiB\r\n* Downloading VM boot image ...\r\n minikube-v1.29.0-amd64.iso....: 65 B \/ 65 B [---------] 100.00% ? p\/s 0s\r\n minikube-v1.29.0-amd64.iso: 276.35 MiB \/ 276.35 MiB 100.00% 167.79 MiB\r\n* Starting control plane node minikube in cluster minikube\r\n* Downloading Kubernetes v1.26.1 preload ...\r\n preloaded-images-k8s-v18-v1...: 397.05 MiB \/ 397.05 MiB 100.00% 111.49\r\n* Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...\r\n* Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...\r\n- Generating certificates and keys ...\r\n- Booting up control plane ...\r\n- Configuring RBAC rules ...\r\n* Configuring bridge CNI (Container Networking Interface) ...\r\n- Using image gcr.io\/k8s-minikube\/storage-provisioner:v5\r\n* Verifying Kubernetes components...\r\n* Enabled addons: storage-provisioner, default-storageclass\r\n* Done! 
kubectl is now configured to use &quot;minikube&quot; cluster and &quot;default&quot; namespace by default<\/pre>\n<p>On first start-up, Minikube fetches the various images it needs.<br \/>\nThe next start will give this result:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\"> manu@~$ minikube start\r\n* minikube v1.29.0 on Rocky 8.7\r\n* Using the kvm2 driver based on existing profile\r\n* Starting control plane node minikube in cluster minikube\r\n* Restarting existing kvm2 VM for &quot;minikube&quot; ...\r\n* Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...\r\n* Configuring bridge CNI (Container Networking Interface) ...\r\n* Verifying Kubernetes components...\r\n- Using image gcr.io\/k8s-minikube\/storage-provisioner:v5\r\n- Using image docker.io\/kubernetesui\/dashboard:v2.7.0\r\n- Using image docker.io\/kubernetesui\/metrics-scraper:v1.0.8\r\n* Some dashboard features require the metrics-server addon. To enable all features please run:\r\n\r\nminikube addons enable metrics-server\r\n\r\n\r\n* Enabled addons: storage-provisioner, dashboard, default-storageclass\r\n* Done! 
kubectl is now configured to use &quot;minikube&quot; cluster and &quot;default&quot; namespace by default<\/pre>\n<p>Minikube is now active on our server.<\/p>\n<p>Check the state of the virtualisation layer.<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\"> manu@~ $ sudo virsh net-list --all\r\nName        State  Autostart Persistent\r\n------------------------------------------------\r\ndefault     active yes       yes\r\nmk-minikube active yes       yes<\/pre>\n<p>Validate our cluster with &#8220;kubectl&#8221;<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\"> manu@~$ kubectl get nodes\r\nNAME      STATUS ROLES         AGE  VERSION\r\nminikube  Ready  control-plane 35m  v1.26.1<\/pre>\n<p>To stop the cluster:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\"> manu@~$ minikube stop\r\n* Stopping node &quot;minikube&quot; ...\r\n* 1 node stopped.<\/pre>\n<h2>Generic PostgreSQL deployment on Minikube<\/h2>\n<p>The helm tool makes it easy to deploy a PostgreSQL instance on our single-node Minikube cluster.<br \/>\nFor this, let&#8217;s use the &#8220;bitnami&#8221; repository via the site: https:\/\/charts.bitnami.com\/bitnami.<\/p>\n<p>It works &#8220;a little&#8221; like yum on a RedHat system: it is a source to search in order to install applications to containerise.<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">manu@ ~$ helm repo add bitnami https:\/\/charts.bitnami.com\/bitnami\r\n&quot;bitnami&quot; has been added to your repositories<\/pre>\n<p>Once the repository has been added, check the available PostgreSQL versions.<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\"> manu@~ $ helm search repo postgres\r\nNAME                         CHART VERSION   APP VERSION            DESCRIPTION\r\nbitnami\/postgresql           12.2.5          15.2.0                 PostgreSQL (Postgres) is an open source object-...\r\nbitnami\/postgresql-ha        11.1.6          15.2.0                 This PostgreSQL cluster solution includes the P...\r\nbitnami\/supabase             0.1.4           0.23.2                 Supabase is an open source Firebase alternative...\r\n<\/pre>\n<p>The bitnami repository offers us the latest version, 15.2, of PostgreSQL. We are going to install it in our minikube environment.<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\"> manu@~ $ helm install postgres bitnami\/postgresql\r\nNAME: postgres\r\nLAST DEPLOYED: Mon Mar 20 15:41:00 2023\r\nNAMESPACE: default\r\nSTATUS: deployed\r\nREVISION: 1\r\nTEST SUITE: None\r\nNOTES:\r\nCHART NAME: postgresql\r\nCHART VERSION: 12.2.5\r\nAPP VERSION: 15.2.0\r\n\r\n** Please be patient while the chart is being deployed **\r\n\r\nPostgreSQL can be accessed via port 5432 on the following DNS names from within your cluster:\r\n\r\npostgres-postgresql.default.svc.cluster.local - Read\/Write connection\r\n\r\nTo get the password for &quot;postgres&quot; run:\r\n\r\nexport POSTGRES_PASSWORD=$(kubectl get secret --namespace default postgres-postgresql -o jsonpath=&quot;{.data.postgres-password}&quot; | base64 -d)\r\n\r\nTo connect to your database run the following command:\r\n\r\nkubectl run postgres-postgresql-client --rm --tty -i --restart='Never' --namespace default --image docker.io\/bitnami\/postgresql:15.2.0-debian-11-r13 --env=&quot;PGPASSWORD=$POSTGRES_PASSWORD&quot; \\\r\n--command -- psql --host postgres-postgresql -U postgres -d postgres -p 5432\r\n\r\n NOTE: If you access the container using bash, make sure that you execute &quot;\/opt\/bitnami\/scripts\/postgresql\/entrypoint.sh \/bin\/bash&quot; in order to avoid the error &quot;psql: local user with ID 1001} does not exist&quot;\r\n\r\nTo connect to your database from outside the cluster execute the following commands:\r\n\r\nkubectl port-forward --namespace default svc\/postgres-postgresql 5432:5432 &amp;\r\nPGPASSWORD=&quot;$POSTGRES_PASSWORD&quot; psql --host 127.0.0.1 -U postgres -d postgres -p 5432\r\n\r\nWARNING: The configured password will be ignored on new installation in case when previous Posgresql release was deleted through the helm command. In that case, old PVC will have an old password, and setting it through helm won't take effect. Deleting persistent volumes (PVs) will solve the issue.<\/pre>\n<p>Check the deployment in the repository.<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\"> manu@~ $ helm list\r\nNAME     NAMESPACE REVISION UPDATED                                 STATUS   CHART             APP VERSION\r\npostgres default   1        2023-03-20 15:41:00.677760646 +0000 UTC deployed postgresql-12.2.5 15.2.0\r\n<\/pre>\n<p>Our containerised PostgreSQL application is now deployed.<\/p>\n<p>Check its installation in the minikube cluster and list the services present:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\"> manu@~ $ kubectl get deployment,pods,svc\r\nNAME                                                READY STATUS  RESTARTS AGE\r\npod\/postgres-postgresql-0                           1\/1   Running 0        18m\r\n\r\nNAME                           TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)  AGE\r\nservice\/kubernetes             ClusterIP  10.96.0.1     none         443\/TCP  36m\r\nservice\/postgres-postgresql    ClusterIP  10.109.83.132 none         5432\/TCP 18m\r\nservice\/postgres-postgresql-hl ClusterIP  None          none         5432\/TCP 18m<\/pre>\n<p>We can see that our PostgreSQL application is registered under the IP address 10.109.83.132. 
The communication port is 5432.<br \/>\nIt is this address that will be referenced as the VIP for our cluster.<\/p>\n<h3>Connecting to the PostgreSQL instance<\/h3>\n<p>During installation, the connection instructions were given to us (notably how to handle the connection password).<\/p>\n<p>To retrieve the postgres password, the &#8220;kubectl&#8221; tool is used.<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\"> manu@~ $ kubectl get secret --namespace default postgres-postgresql -o jsonpath=&quot;{.data.postgres-password}&quot; | base64 -d\r\nHvbCO9Co5R<\/pre>\n<p>Store this value in a variable that we can name PGPASS, for example.<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\"> manu@~$ export PGPASS=$(kubectl get secret --namespace default postgres-postgresql -o jsonpath=&quot;{.data.postgres-password}&quot; | base64 --decode)<\/pre>\n<p>Run the command given during the PostgreSQL installation with &#8220;helm&#8221; to connect to the instance.<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\"> manu@~$ kubectl run postgres-postgresql-client --rm --tty -i --restart='Never' --namespace default --image docker.io\/bitnami\/postgresql:15.2.0-debian-11-r13 --env=&quot;PGPASSWORD=$PGPASS&quot; --command -- psql --host postgres-postgresql -U postgres -d postgres -p 5432\r\nIf you don't see a command prompt, try pressing enter.\r\n\r\npostgres=# \\l+\r\n                                                     List of databases\r\nName       | Owner    | Encoding | Collate     | Ctype       | Access privileges     | Size    | Tablespace | Description\r\n-----------+----------+----------+-------------+-------------+-----------------------+---------+------------+--------------------------------------------\r\npostgres   | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                       | 7453 kB | pg_default | default administrative connection database\r\ntemplate0  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c\/postgres          +| 7297 kB | pg_default | unmodifiable empty database\r\n           |          |          |             |             | postgres=CTc\/postgres |         |            |\r\ntemplate1  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c\/postgres          +| 7525 kB | pg_default | default template for new databases\r\n           |          |          |             |             | postgres=CTc\/postgres |         |            |\r\n(3 rows)\r\n\r\npostgres=# select version();\r\nversion\r\n---------------------------------------------------------------------------------------------------\r\nPostgreSQL 15.2 on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit\r\n(1 row)<\/pre>\n<p>To connect to this PostgreSQL instance, we launched a &#8220;PostgreSQL client&#8221; application deployed from the Docker image &#8216;docker.io\/bitnami\/postgresql:15.2.0-debian-11-r13&#8217;, that is PostgreSQL version 15.2 built on Debian 11 (image revision r13). 
Once deployed, this application lets us run the &#8220;psql&#8221; tool to connect.<\/p>\n<p>Note that on first execution, this image is registered in the minikube node.<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\"> manu@ ~$ kubectl get pods -o wide\r\nNAME                        READY  STATUS    RESTARTS    AGE IP          NODE     NOMINATED NODE READINESS GATES\r\npostgres-postgresql-0       1\/1    Running   2 (25m ago) 24h 10.244.0.17 minikube none      none \r\npostgres-postgresql-client  0\/1    Completed 0           16m 10.244.0.19 minikube none      none <\/pre>\n<p>Notice the &#8220;postgres-postgresql-client&#8221; pod: its status is &#8220;Completed&#8221; because we no longer have an active connection, which is also why &#8220;READY&#8221; shows 0\/1.<br \/>\nIt is entirely possible to remove this application published in the minikube pods. 
For this, run the command:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\"> manu@ ~$ kubectl delete pod postgres-postgresql-client\r\npod &quot;postgres-postgresql-client&quot; deleted<\/pre>\n<p>Check:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\"> manu@ ~$ kubectl get pods -o wide\r\nNAME                   READY STATUS   RESTARTS    AGE IP           NODE      NOMINATED NODE READINESS GATES\r\npostgres-postgresql-0  1\/1   Running  2 (25m ago) 24h 10.244.0.17  minikube  none           none<\/pre>\n<p>&nbsp;<\/p>\n<p>We have just installed a default PostgreSQL instance with the helm tool in our Minikube cluster.<\/p>\n<h2>Instance configuration and persistent volume on Minikube<\/h2>\n<p>In order to keep your data on disk, it is possible to set up a PostgreSQL instance with what is called a &#8220;persistent volume&#8221;.<br \/>\nFor this, a FileSystem must be dedicated to the PostgreSQL data on the local VM running Minikube.<\/p>\n<p>In addition, PostgreSQL can be created with configuration values different from what the helm tool offers.<\/p>\n<p>The benefit is having an instance pre-configured for a business application right from start-up.<\/p>\n<h2>The YAML files<\/h2>\n<p>To deploy a specific PostgreSQL instance, we must use &#8220;yaml&#8221; configuration files that we load into the cluster via &#8220;kubectl&#8221;.<\/p>\n<h4>Instance configuration YAML file<\/h4>\n<p>A &#8220;configmap&#8221; file must be created if we want to specify credentials (user\/password) and\/or a business database.<\/p>\n<p>Use the following yaml file to create a user &#8220;<strong>capdata<\/strong>&#8220;, with a database &#8220;<strong>capdb<\/strong>&#8221; for the &#8220;postgres&#8221; application.<\/p>\n<pre class=\"brush: yaml; title: ; notranslate\" title=\"\">apiVersion: v1\r\nkind: ConfigMap\r\nmetadata:\r\n  name: pg-capdata\r\n  labels:\r\n    app: postgres\r\ndata:\r\n  POSTGRES_DB: capdb\r\n  POSTGRES_USER: capdata\r\n  POSTGRES_PASSWORD: passcapdata2023<\/pre>\n<p>Apply this file to your Minikube Kubernetes configuration, then validate it.<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">manu@~$ kubectl apply -f pg-configmap.yaml\r\nconfigmap\/pg-capdata created\r\n\r\nmanu@ ~$ kubectl get configmap\r\nNAME              DATA AGE\r\nkube-root-ca.crt  1    3d23h\r\npg-capdata        3    4s<\/pre>\n<h4>YAML files for the volumes<\/h4>\n<p>Choose a dedicated FileSystem on the server, and create the directory to host the PostgreSQL instance.<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">manu@ ~$ df -h \/data\r\nFilesystem     Size Used Avail Use% Mounted on\r\n\/dev\/nvme1n1p1 19G  28K  18G   1%   \/data\r\n\r\nmanu@ ~$ mkdir -p \/data\/postgresql<\/pre>\n<p>We will need 2 yaml files for the volume configuration: one for the persistent volume, which keeps our instance data throughout its life cycle. In addition, we need what is called a &#8220;Persistent Volume Claim&#8221;. 
It is a logical view of the volume, managed by the Kubernetes cluster.<\/p>\n<p>The 2 files contain the following entries:<\/p>\n<p>pg-data.yaml<\/p>\n<pre class=\"brush: yaml; title: ; notranslate\" title=\"\">apiVersion: v1\r\nkind: PersistentVolume \r\nmetadata:\r\n  name: pg-data \r\n  labels:\r\n    type: local \r\n    app: postgres\r\nspec:\r\n  storageClassName: manual\r\n  capacity:\r\n    storage: 8Gi \r\n  accessModes:\r\n    - ReadWriteMany\r\n  hostPath:\r\n    path: &quot;\/data\/postgresql&quot; <\/pre>\n<p>pg-data-pvc.yaml<\/p>\n<pre class=\"brush: yaml; title: ; notranslate\" title=\"\">apiVersion: v1\r\nkind: PersistentVolumeClaim \r\nmetadata:\r\n  name: pg-data-claim\r\n  labels:\r\n    app: postgres \r\nspec:\r\n  storageClassName: manual\r\n  accessModes:\r\n    - ReadWriteMany\r\n  resources:\r\n    requests:\r\n      storage: 8Gi<\/pre>\n<p>Load these 2 yaml files into Kubernetes.<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">manu@ ~$ kubectl apply -f pg-data.yaml\r\n\r\nmanu@ ~$ kubectl apply -f pg-data-pvc.yaml<\/pre>\n<p>Check the information in the cluster.<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">manu@ ~$ kubectl get pv -o wide\r\nNAME     CAPACITY     ACCESS MODES     RECLAIM POLICY STATUS  CLAIM STORAGECLASS     REASON  AGE    VOLUMEMODE\r\npg-data  8Gi          RWX              Retain         Bound   default\/pg-data-claim  manual  2m50s  Filesystem\r\n\r\n\r\nmanu@ ~$ kubectl get pvc -o wide\r\nNAME                  STATUS  VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS  AGE   VOLUMEMODE\r\npg-data-claim         Bound   pg-data  8Gi        RWX            manual        94s   Filesystem<\/pre>\n<h4>Deployment YAML file<\/h4>\n<p>Having declared the configuration and the volumes, we can deploy the instance in the Kubernetes cluster. 
For example, we declare two replicas of a PostgreSQL 15.2 instance, listening on port 5432.<\/p>\n<pre class=\"brush: yaml; title: ; notranslate\" title=\"\">apiVersion: apps\/v1\r\nkind: Deployment\r\nmetadata:\r\n  name: postgres\r\nspec:\r\n  replicas: 2\r\n  selector:\r\n    matchLabels:\r\n      app: postgres\r\n  template:\r\n    metadata:\r\n      labels:\r\n        app: postgres\r\n    spec:\r\n      containers:\r\n        - name: postgres\r\n          image: postgres:15.2\r\n          imagePullPolicy: &quot;IfNotPresent&quot;\r\n          ports:\r\n            - containerPort: 5432\r\n          envFrom:\r\n            - configMapRef:\r\n                name: pg-capdata\r\n          volumeMounts:\r\n            - mountPath: \/var\/lib\/postgresql\/data\r\n              name: postgresdata\r\n      volumes:\r\n        - name: postgresdata\r\n          persistentVolumeClaim:\r\n            claimName: pg-data-claim<\/pre>\n<p>Apply this YAML file and check that the &#8220;Running&#8221; status appears on both PostgreSQL replicas. 
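<\/p>\n<p>Note in passing: the stock &#8220;postgres&#8221; image declares no health check, so &#8220;Running&#8221; only means the container process is up. As a sketch (not part of the deployment above, and untested here), a readiness probe based on <code>pg_isready<\/code> could be added under the container spec so that Kubernetes only marks a pod Ready once the engine actually accepts connections:<\/p>\n<pre class=\"brush: yaml; title: ; notranslate\" title=\"\">          readinessProbe:\r\n            exec:\r\n              command: [&quot;pg_isready&quot;, &quot;-U&quot;, &quot;capdata&quot;, &quot;-d&quot;, &quot;capdb&quot;]\r\n            initialDelaySeconds: 5\r\n            periodSeconds: 10<\/pre>\n<p>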
On Kubernetes, a &#8220;pod&#8221; is an application container shipped with its embedded libraries.<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">manu@ ~$ kubectl apply -f pg-deploiement.yaml\r\ndeployment.apps\/postgres-deploy created\r\n\r\nmanu@ ~$ kubectl get pods,deployments -o wide\r\nNAME                             READY   STATUS   RESTARTS    AGE    IP            NODE      NOMINATED NODE   READINESS GATES\r\npod\/postgres-66855ddfc5-drsqj    1\/1     Running  0           83s    10.244.0.3    minikube  none             none\r\npod\/postgres-66855ddfc5-n9fc8    1\/1     Running  1 (71s ago) 83s    10.244.0.4    minikube  none             none\r\n\r\nNAME                       READY   UP-TO-DATE   AVAILABLE     AGE  CONTAINERS    IMAGES         SELECTOR\r\ndeployment.apps\/postgres   2\/2     2            2             83s  postgres      postgres:15.2  app=postgres<\/pre>\n<p>On the node named &#8220;minikube&#8221;, we can therefore see our two PostgreSQL replicas running, with IPs in 10.244.0.***<\/p>\n<h4>PostgreSQL service YAML file<\/h4>\n<p>In order to connect to our PostgreSQL instance, we must define a service for it. 
It is a sort of front door for reaching our instance.<\/p>\n<pre class=\"brush: yaml; title: ; notranslate\" title=\"\">apiVersion: v1\r\nkind: Service\r\nmetadata:\r\n  name: postgres-capdata\r\n  labels:\r\n    app: postgres\r\nspec:\r\n  type: NodePort\r\n  ports:\r\n    - port: 5432\r\n  selector:\r\n    app: postgres<\/pre>\n<p>Apply this YAML file to the cluster.<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">manu@ ~$ kubectl apply -f pg-service.yaml\r\nservice\/postgres-capdata created\r\n\r\nmanu@ ~$ kubectl get svc -o wide\r\nNAME        TYPE       CLUSTER-IP     EXTERNAL-IP  PORT(S)         AGE    SELECTOR\r\nkubernetes  ClusterIP  10.96.0.1      none         443\/TCP         16d    none\r\npostgres    NodePort   10.110.166.18  none         5432:32581\/TCP  6m48s  app=postgres<\/pre>\n<h3>Connecting to PostgreSQL<\/h3>\n<p>Two methods are available for connecting to the PostgreSQL instance.<\/p>\n<ul>\n<li>Connection via Kubernetes<\/li>\n<\/ul>\n<p>Use the &#8220;kubectl&#8221; tool to connect to the PostgreSQL instance. 
Use the previously defined account, &#8220;<strong>capdata<\/strong>&#8221;, on the &#8220;<strong>capdb<\/strong>&#8221; database.<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">manu@ ~$ kubectl exec -it postgres-66855ddfc5-drsqj -- psql -h localhost -U capdata --password -p 5432 capdb\r\nPassword:\r\npsql (15.2 (Debian 15.2-1.pgdg110+1))\r\nType &quot;help&quot; for help.\r\n\r\ncapdb=# \\conninfo\r\nYou are connected to database &quot;capdb&quot; as user &quot;capdata&quot; on host &quot;localhost&quot; (address &quot;127.0.0.1&quot;) at port &quot;5432&quot;.\r\ncapdb=#<\/pre>\n<ul>\n<li>Connection via a PostgreSQL client local to the VM.<\/li>\n<\/ul>\n<p>On our Rocky Linux VM, we have an old version of &#8220;psql&#8221; (PostgreSQL 10) that we can use for the connection.<\/p>\n<p>First, note the node port value we found earlier with the command &#8220;kubectl get svc -o wide&#8221;.<\/p>\n<p>For our dedicated PostgreSQL service, the port mapping is &#8220;5432:32581&#8221;: 5432 is the listening port inside Kubernetes, while 32581 is the NodePort we need in order to reach it from our VM.<\/p>\n<p>In addition, the IP of our Kubernetes cluster must be known. 
To find it, run this command:<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">manu@ip-172-44-3-198 ~$ kubectl get pods --all-namespaces -o wide\r\nNAMESPACE    NAME                              READY STATUS   RESTARTS    AGE   IP              NODE      NOMINATED NODE READINESS GATES\r\ndefault      postgres-66855ddfc5-drsqj         1\/1   Running  0           17m   10.244.0.3      minikube  none           none\r\ndefault      postgres-66855ddfc5-n9fc8         1\/1   Running  1 (17m ago) 17m   10.244.0.4      minikube  none           none\r\nkube-system  coredns-787d4945fb-87kzd          1\/1   Running  0           23m   10.244.0.2      minikube  none           none\r\nkube-system  etcd-minikube                     1\/1   Running  0           23m   192.168.39.227  minikube  none           none\r\nkube-system  kube-apiserver-minikube           1\/1   Running  0           23m   192.168.39.227  minikube  none           none\r\nkube-system  kube-controller-manager-minikube  1\/1   Running  0           23m   192.168.39.227  minikube  none           none\r\nkube-system  kube-proxy-lptq2                  1\/1   Running  0           23m   192.168.39.227  minikube  none           none\r\nkube-system  kube-scheduler-minikube           1\/1   Running  0           23m   192.168.39.227  minikube  none           none\r\nkube-system  storage-provisioner               1\/1   Running  1 (22m ago) 23m   192.168.39.227  minikube  none           none<\/pre>\n<p>The Kubernetes cluster IP is 192.168.39.227.<\/p>\n<p>The psql connection is therefore made to this IP. 
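<\/p>\n<p>As an aside, Minikube can hand us the same endpoint directly, without scanning the pod list (assuming the service name from our manifest; output will vary per cluster):<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">manu@ ~$ minikube ip\r\n\r\nmanu@ ~$ minikube service postgres-capdata --url<\/pre>\n<p>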
Validate the connection to the &#8220;capdb&#8221; database.<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">manu@ ~$ psql -h 192.168.39.227 -U capdata -p 32581 capdb\r\nPassword for user capdata:\r\npsql (10.23, server 15.2 (Debian 15.2-1.pgdg110+1))\r\nWARNING: psql major version 10, server major version 15.\r\nSome psql features might not work.\r\nType &quot;help&quot; for help.\r\n\r\ncapdb=# \\conninfo\r\nYou are connected to database &quot;capdb&quot; as user &quot;capdata&quot; on host &quot;192.168.39.227&quot; at port &quot;32581&quot;.\r\ncapdb=# \\l+\r\nList of databases\r\nName       | Owner   | Encoding | Collate    | Ctype      | Access privileges   | Size    | Tablespace | Description\r\n-----------+---------+----------+------------+------------+---------------------+---------+------------+--------------------------------------------\r\ncapdb      | capdata | UTF8     | en_US.utf8 | en_US.utf8 |                     | 7453 kB | pg_default |\r\npostgres   | capdata | UTF8     | en_US.utf8 | en_US.utf8 |                     | 7453 kB | pg_default | default administrative connection database\r\ntemplate0  | capdata | UTF8     | en_US.utf8 | en_US.utf8 |         =c\/capdata +| 7297 kB | pg_default | unmodifiable empty database\r\n           |         |          |            |            | capdata=CTc\/capdata |         |            |\r\ntemplate1  | capdata | UTF8     | en_US.utf8 | en_US.utf8 |         =c\/capdata +| 7525 kB | pg_default | default template for new databases\r\n           |         |          |            |            | capdata=CTc\/capdata |         |            |\r\n(4 rows)<\/pre>\n<p>The &#8220;capdata&#8221; account we declared in the ConfigMap file is the owner of every database in this instance.<\/p>\n<p>&nbsp;<\/p>\n<h2>The logs<\/h2>\n<p>It is possible to inspect the logs of our &#8220;pods&#8221; deployed on Kubernetes.<br \/>\nTo do so, use the command &#8220;kubectl 
logs&#8221;.<\/p>\n<p>For example, run this command against one of the cluster&#8217;s PostgreSQL replicas, and you get read access to the instance&#8217;s &#8220;postgresql.log&#8221; output.<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">manu@ ~$ kubectl logs pod\/postgres-66855ddfc5-drsqj\r\nThe files belonging to this database system will be owned by user &quot;postgres&quot;.\r\nThis user must also own the server process.\r\n\r\nThe database cluster will be initialized with locale &quot;en_US.utf8&quot;.\r\nThe default database encoding has accordingly been set to &quot;UTF8&quot;.\r\nThe default text search configuration will be set to &quot;english&quot;.\r\n\r\nData page checksums are disabled.\r\n\r\nfixing permissions on existing directory \/var\/lib\/postgresql\/data ... ok\r\ncreating subdirectories ... ok\r\nselecting dynamic shared memory implementation ... posix\r\nselecting default max_connections ... 100\r\nselecting default shared_buffers ... 128MB\r\nselecting default time zone ... Etc\/UTC\r\ncreating configuration files ... ok\r\nrunning bootstrap script ... ok\r\nperforming post-bootstrap initialization ... ok\r\nsyncing data to disk ... ok\r\n\r\ninitdb: warning: enabling &quot;trust&quot; authentication for local connections\r\ninitdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb.\r\n\r\nSuccess. 
You can now start the database server using:\r\n\r\npg_ctl -D \/var\/lib\/postgresql\/data -l logfile start\r\n\r\npg_ctl: another server might be running; trying to start server anyway\r\nwaiting for server to start....2023-03-27 09:30:32.071 UTC [48] LOG: starting PostgreSQL 15.2 (Debian 15.2-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit\r\n2023-03-27 09:30:32.075 UTC [48] LOG: listening on Unix socket &quot;\/var\/run\/postgresql\/.s.PGSQL.5432&quot;\r\n2023-03-27 09:30:32.087 UTC [51] LOG: database system was interrupted; last known up at 2023-03-27 09:30:32 UTC\r\n2023-03-27 09:30:32.110 UTC [51] LOG: database system was not properly shut down; automatic recovery in progress\r\n2023-03-27 09:30:32.113 UTC [51] LOG: invalid record length at 0\/14FE0E0: wanted 24, got 0\r\n2023-03-27 09:30:32.113 UTC [51] LOG: redo is not required\r\n2023-03-27 09:30:32.118 UTC [49] LOG: checkpoint starting: end-of-recovery immediate wait\r\n2023-03-27 09:30:32.140 UTC [49] LOG: checkpoint complete: wrote 3 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.006 s, sync=0.003 s, total=0.024 s; sync files=2, longest=0.002 s, average=0.002 s; distance=0 kB, estimate=0 kB\r\n2023-03-27 09:30:32.144 UTC [48] LOG: database system is ready to accept connections\r\ndone\r\nserver started\r\nCREATE DATABASE\r\n\r\n\/usr\/local\/bin\/docker-entrypoint.sh: ignoring \/docker-entrypoint-initdb.d\/*\r\n\r\n2023-03-27 09:30:32.262 UTC [48] LOG: received fast shutdown request\r\nwaiting for server to shut down....2023-03-27 09:30:32.266 UTC [48] LOG: aborting any active transactions\r\n2023-03-27 09:30:32.268 UTC [48] LOG: background worker &quot;logical replication launcher&quot; (PID 54) exited with exit code 1\r\n2023-03-27 09:30:32.268 UTC [49] LOG: shutting down\r\n2023-03-27 09:30:32.271 UTC [49] LOG: checkpoint starting: shutdown immediate\r\n2023-03-27 09:30:32.341 UTC [49] LOG: checkpoint complete: wrote 916 buffers 
(5.6%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.013 s, sync=0.045 s, total=0.073 s; sync files=249, longest=0.036 s, average=0.001 s; distance=4217 kB, estimate=4217 kB\r\n2023-03-27 09:30:32.345 UTC [48] LOG: database system is shut down\r\ndone\r\nserver stopped\r\n\r\nPostgreSQL init process complete; ready for start up.\r\n\r\n2023-03-27 09:30:32.384 UTC [1] LOG: starting PostgreSQL 15.2 (Debian 15.2-1.pgdg110+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit\r\n2023-03-27 09:30:32.385 UTC [1] LOG: listening on IPv4 address &quot;0.0.0.0&quot;, port 5432\r\n2023-03-27 09:30:32.385 UTC [1] LOG: listening on IPv6 address &quot;::&quot;, port 5432\r\n2023-03-27 09:30:32.392 UTC [1] LOG: listening on Unix socket &quot;\/var\/run\/postgresql\/.s.PGSQL.5432&quot;\r\n2023-03-27 09:30:32.401 UTC [64] LOG: database system was shut down at 2023-03-27 09:30:32 UTC\r\n2023-03-27 09:30:32.406 UTC [1] LOG: database system is ready to accept connections<\/pre>\n<p>We get all the information from the creation of the instance up to its latest startup.<\/p>\n<p>Feel free to leave a message!<\/p>\n<p>Emmanuel RAMI.<\/p>\n<a class=\"synved-social-button synved-social-button-share synved-social-size-24 synved-social-resolution-single synved-social-provider-twitter nolightbox\" data-provider=\"twitter\" target=\"_blank\" rel=\"nofollow\" title=\"Share on Twitter\" href=\"https:\/\/twitter.com\/intent\/tweet?url=https%3A%2F%2Fblog.capdata.fr%2Findex.php%2Fwp-json%2Fwp%2Fv2%2Fposts%2F9753&#038;text=Article%20sur%20le%20blog%20de%20la%20Capdata%20Tech%20Team%20%3A%20\" style=\"font-size: 0px;width:24px;height:24px;margin:0;margin-bottom:5px;margin-right:5px\"><img loading=\"lazy\" decoding=\"async\" alt=\"twitter\" title=\"Share on Twitter\" class=\"synved-share-image synved-social-image synved-social-image-share\" width=\"24\" height=\"24\" style=\"display: 
inline;width:24px;height:24px;margin: 0;padding: 0;border: none;box-shadow: none\" src=\"https:\/\/blog.capdata.fr\/wp-content\/plugins\/social-media-feather\/synved-social\/image\/social\/regular\/48x48\/twitter.png\" \/><\/a><a class=\"synved-social-button synved-social-button-share synved-social-size-24 synved-social-resolution-single synved-social-provider-linkedin nolightbox\" data-provider=\"linkedin\" target=\"_blank\" rel=\"nofollow\" title=\"Share on Linkedin\" href=\"https:\/\/www.linkedin.com\/shareArticle?mini=true&#038;url=https%3A%2F%2Fblog.capdata.fr%2Findex.php%2Fwp-json%2Fwp%2Fv2%2Fposts%2F9753&#038;title=PostgreSQL%20sur%20la%20solution%20Kubernetes%20locale%20Minikube\" style=\"font-size: 0px;width:24px;height:24px;margin:0;margin-bottom:5px;margin-right:5px\"><img loading=\"lazy\" decoding=\"async\" alt=\"linkedin\" title=\"Share on Linkedin\" class=\"synved-share-image synved-social-image synved-social-image-share\" width=\"24\" height=\"24\" style=\"display: inline;width:24px;height:24px;margin: 0;padding: 0;border: none;box-shadow: none\" src=\"https:\/\/blog.capdata.fr\/wp-content\/plugins\/social-media-feather\/synved-social\/image\/social\/regular\/48x48\/linkedin.png\" \/><\/a><a class=\"synved-social-button synved-social-button-share synved-social-size-24 synved-social-resolution-single synved-social-provider-mail nolightbox\" data-provider=\"mail\" rel=\"nofollow\" title=\"Share by email\" href=\"mailto:?subject=PostgreSQL%20sur%20la%20solution%20Kubernetes%20locale%20Minikube&#038;body=Article%20sur%20le%20blog%20de%20la%20Capdata%20Tech%20Team%20%3A%20:%20https%3A%2F%2Fblog.capdata.fr%2Findex.php%2Fwp-json%2Fwp%2Fv2%2Fposts%2F9753\" style=\"font-size: 0px;width:24px;height:24px;margin:0;margin-bottom:5px\"><img loading=\"lazy\" decoding=\"async\" alt=\"mail\" title=\"Share by email\" class=\"synved-share-image synved-social-image synved-social-image-share\" width=\"24\" height=\"24\" style=\"display: 
inline;width:24px;height:24px;margin: 0;padding: 0;border: none;box-shadow: none\" src=\"https:\/\/blog.capdata.fr\/wp-content\/plugins\/social-media-feather\/synved-social\/image\/social\/regular\/48x48\/mail.png\" \/><\/a>","protected":false},"excerpt":{"rendered":"<p>Hello Il y a quelques temps, je vous avais pr\u00e9sent\u00e9 un premier article sur l&#8217;installation d&#8217;une instance de base de donn\u00e9es PostgreSQL sous Docker. C&#8217;est cet article qui nous a permis de mettre un premier pas dans le monde de&hellip; <a href=\"https:\/\/blog.capdata.fr\/index.php\/postgresql-sur-la-solution-kubernetes-locale-minikube\/\" class=\"more-link\">Continuer la lecture <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":32,"featured_media":9754,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[383,266],"tags":[21,443,444],"class_list":["post-9753","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-container","category-postgresql","tag-cluster","tag-kubernetes","tag-minikube"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v20.8 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>PostgreSQL sur la solution Kubernetes locale Minikube PostgreSQL minikube<\/title>\n<meta name=\"description\" content=\"Installer PostgreSQL sous un cluster Kubernetes minikube\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/blog.capdata.fr\/index.php\/postgresql-sur-la-solution-kubernetes-locale-minikube\/\" \/>\n<meta property=\"og:locale\" content=\"fr_FR\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"PostgreSQL sur la solution Kubernetes locale Minikube PostgreSQL minikube\" \/>\n<meta property=\"og:description\" content=\"Installer 
PostgreSQL sous un cluster Kubernetes minikube\" \/>\n<meta property=\"og:url\" content=\"https:\/\/blog.capdata.fr\/index.php\/postgresql-sur-la-solution-kubernetes-locale-minikube\/\" \/>\n<meta property=\"og:site_name\" content=\"Capdata TECH BLOG\" \/>\n<meta property=\"article:published_time\" content=\"2023-03-29T14:42:35+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-03-31T12:44:25+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/03\/cubes.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"488\" \/>\n\t<meta property=\"og:image:height\" content=\"426\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Emmanuel RAMI\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"\u00c9crit par\" \/>\n\t<meta name=\"twitter:data1\" content=\"Emmanuel RAMI\" \/>\n\t<meta name=\"twitter:label2\" content=\"Dur\u00e9e de lecture estim\u00e9e\" \/>\n\t<meta name=\"twitter:data2\" content=\"22 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/blog.capdata.fr\/index.php\/postgresql-sur-la-solution-kubernetes-locale-minikube\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/blog.capdata.fr\/index.php\/postgresql-sur-la-solution-kubernetes-locale-minikube\/\"},\"author\":{\"name\":\"Emmanuel RAMI\",\"@id\":\"https:\/\/blog.capdata.fr\/#\/schema\/person\/797b9b6698fa35f7ce3e9a70a8b102ae\"},\"headline\":\"PostgreSQL sur la solution Kubernetes locale 
Minikube\",\"datePublished\":\"2023-03-29T14:42:35+00:00\",\"dateModified\":\"2023-03-31T12:44:25+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/blog.capdata.fr\/index.php\/postgresql-sur-la-solution-kubernetes-locale-minikube\/\"},\"wordCount\":4626,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/blog.capdata.fr\/#organization\"},\"keywords\":[\"cluster\",\"Kubernetes\",\"minikube\"],\"articleSection\":[\"Container\",\"PostgreSQL\"],\"inLanguage\":\"fr-FR\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/blog.capdata.fr\/index.php\/postgresql-sur-la-solution-kubernetes-locale-minikube\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/blog.capdata.fr\/index.php\/postgresql-sur-la-solution-kubernetes-locale-minikube\/\",\"url\":\"https:\/\/blog.capdata.fr\/index.php\/postgresql-sur-la-solution-kubernetes-locale-minikube\/\",\"name\":\"PostgreSQL sur la solution Kubernetes locale Minikube PostgreSQL minikube\",\"isPartOf\":{\"@id\":\"https:\/\/blog.capdata.fr\/#website\"},\"datePublished\":\"2023-03-29T14:42:35+00:00\",\"dateModified\":\"2023-03-31T12:44:25+00:00\",\"description\":\"Installer PostgreSQL sous un cluster Kubernetes minikube\",\"breadcrumb\":{\"@id\":\"https:\/\/blog.capdata.fr\/index.php\/postgresql-sur-la-solution-kubernetes-locale-minikube\/#breadcrumb\"},\"inLanguage\":\"fr-FR\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/blog.capdata.fr\/index.php\/postgresql-sur-la-solution-kubernetes-locale-minikube\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/blog.capdata.fr\/index.php\/postgresql-sur-la-solution-kubernetes-locale-minikube\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Accueil\",\"item\":\"https:\/\/blog.capdata.fr\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"PostgreSQL sur la solution Kubernetes locale 
Minikube\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/blog.capdata.fr\/#website\",\"url\":\"https:\/\/blog.capdata.fr\/\",\"name\":\"Capdata TECH BLOG\",\"description\":\"Le blog technique sur les bases de donn\u00e9es de CAP DATA Consulting\",\"publisher\":{\"@id\":\"https:\/\/blog.capdata.fr\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/blog.capdata.fr\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"fr-FR\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/blog.capdata.fr\/#organization\",\"name\":\"Capdata TECH BLOG\",\"url\":\"https:\/\/blog.capdata.fr\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\/\/blog.capdata.fr\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/01\/logo_capdata.webp\",\"contentUrl\":\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/01\/logo_capdata.webp\",\"width\":800,\"height\":254,\"caption\":\"Capdata TECH BLOG\"},\"image\":{\"@id\":\"https:\/\/blog.capdata.fr\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.linkedin.com\/company\/cap-data-consulting\/mycompany\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/blog.capdata.fr\/#\/schema\/person\/797b9b6698fa35f7ce3e9a70a8b102ae\",\"name\":\"Emmanuel RAMI\",\"sameAs\":[\"https:\/\/blog.capdata.fr\"],\"url\":\"https:\/\/blog.capdata.fr\/index.php\/author\/erami\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"PostgreSQL sur la solution Kubernetes locale Minikube PostgreSQL minikube","description":"Installer PostgreSQL sous un cluster Kubernetes minikube","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/blog.capdata.fr\/index.php\/postgresql-sur-la-solution-kubernetes-locale-minikube\/","og_locale":"fr_FR","og_type":"article","og_title":"PostgreSQL sur la solution Kubernetes locale Minikube PostgreSQL minikube","og_description":"Installer PostgreSQL sous un cluster Kubernetes minikube","og_url":"https:\/\/blog.capdata.fr\/index.php\/postgresql-sur-la-solution-kubernetes-locale-minikube\/","og_site_name":"Capdata TECH BLOG","article_published_time":"2023-03-29T14:42:35+00:00","article_modified_time":"2023-03-31T12:44:25+00:00","og_image":[{"width":488,"height":426,"url":"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/03\/cubes.jpg","type":"image\/jpeg"}],"author":"Emmanuel RAMI","twitter_card":"summary_large_image","twitter_misc":{"\u00c9crit par":"Emmanuel RAMI","Dur\u00e9e de lecture estim\u00e9e":"22 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/blog.capdata.fr\/index.php\/postgresql-sur-la-solution-kubernetes-locale-minikube\/#article","isPartOf":{"@id":"https:\/\/blog.capdata.fr\/index.php\/postgresql-sur-la-solution-kubernetes-locale-minikube\/"},"author":{"name":"Emmanuel RAMI","@id":"https:\/\/blog.capdata.fr\/#\/schema\/person\/797b9b6698fa35f7ce3e9a70a8b102ae"},"headline":"PostgreSQL sur la solution Kubernetes locale 
Minikube","datePublished":"2023-03-29T14:42:35+00:00","dateModified":"2023-03-31T12:44:25+00:00","mainEntityOfPage":{"@id":"https:\/\/blog.capdata.fr\/index.php\/postgresql-sur-la-solution-kubernetes-locale-minikube\/"},"wordCount":4626,"commentCount":0,"publisher":{"@id":"https:\/\/blog.capdata.fr\/#organization"},"keywords":["cluster","Kubernetes","minikube"],"articleSection":["Container","PostgreSQL"],"inLanguage":"fr-FR","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/blog.capdata.fr\/index.php\/postgresql-sur-la-solution-kubernetes-locale-minikube\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/blog.capdata.fr\/index.php\/postgresql-sur-la-solution-kubernetes-locale-minikube\/","url":"https:\/\/blog.capdata.fr\/index.php\/postgresql-sur-la-solution-kubernetes-locale-minikube\/","name":"PostgreSQL sur la solution Kubernetes locale Minikube PostgreSQL minikube","isPartOf":{"@id":"https:\/\/blog.capdata.fr\/#website"},"datePublished":"2023-03-29T14:42:35+00:00","dateModified":"2023-03-31T12:44:25+00:00","description":"Installer PostgreSQL sous un cluster Kubernetes minikube","breadcrumb":{"@id":"https:\/\/blog.capdata.fr\/index.php\/postgresql-sur-la-solution-kubernetes-locale-minikube\/#breadcrumb"},"inLanguage":"fr-FR","potentialAction":[{"@type":"ReadAction","target":["https:\/\/blog.capdata.fr\/index.php\/postgresql-sur-la-solution-kubernetes-locale-minikube\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/blog.capdata.fr\/index.php\/postgresql-sur-la-solution-kubernetes-locale-minikube\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Accueil","item":"https:\/\/blog.capdata.fr\/"},{"@type":"ListItem","position":2,"name":"PostgreSQL sur la solution Kubernetes locale Minikube"}]},{"@type":"WebSite","@id":"https:\/\/blog.capdata.fr\/#website","url":"https:\/\/blog.capdata.fr\/","name":"Capdata TECH BLOG","description":"Le blog technique sur les bases de donn\u00e9es de CAP DATA 
Consulting","publisher":{"@id":"https:\/\/blog.capdata.fr\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/blog.capdata.fr\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"fr-FR"},{"@type":"Organization","@id":"https:\/\/blog.capdata.fr\/#organization","name":"Capdata TECH BLOG","url":"https:\/\/blog.capdata.fr\/","logo":{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/blog.capdata.fr\/#\/schema\/logo\/image\/","url":"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/01\/logo_capdata.webp","contentUrl":"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/01\/logo_capdata.webp","width":800,"height":254,"caption":"Capdata TECH BLOG"},"image":{"@id":"https:\/\/blog.capdata.fr\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.linkedin.com\/company\/cap-data-consulting\/mycompany\/"]},{"@type":"Person","@id":"https:\/\/blog.capdata.fr\/#\/schema\/person\/797b9b6698fa35f7ce3e9a70a8b102ae","name":"Emmanuel 
RAMI","sameAs":["https:\/\/blog.capdata.fr"],"url":"https:\/\/blog.capdata.fr\/index.php\/author\/erami\/"}]}},"_links":{"self":[{"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/posts\/9753","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/users\/32"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/comments?post=9753"}],"version-history":[{"count":47,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/posts\/9753\/revisions"}],"predecessor-version":[{"id":9983,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/posts\/9753\/revisions\/9983"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/media\/9754"}],"wp:attachment":[{"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/media?parent=9753"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/categories?post=9753"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/tags?post=9753"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}