{"id":8438,"date":"2021-01-15T15:01:30","date_gmt":"2021-01-15T14:01:30","guid":{"rendered":"https:\/\/blog.capdata.fr\/?p=8438"},"modified":"2021-01-15T15:01:30","modified_gmt":"2021-01-15T14:01:30","slug":"postgresql-benchmarking","status":"publish","type":"post","link":"https:\/\/blog.capdata.fr\/index.php\/postgresql-benchmarking\/","title":{"rendered":"PostgreSQL Benchmarking"},"content":{"rendered":"<a class=\"synved-social-button synved-social-button-share synved-social-size-24 synved-social-resolution-single synved-social-provider-twitter nolightbox\" data-provider=\"twitter\" target=\"_blank\" rel=\"nofollow\" title=\"Share on Twitter\" href=\"https:\/\/twitter.com\/intent\/tweet?url=https%3A%2F%2Fblog.capdata.fr%2Findex.php%2Fwp-json%2Fwp%2Fv2%2Fposts%2F8438&#038;text=Article%20sur%20le%20blog%20de%20la%20Capdata%20Tech%20Team%20%3A%20\" style=\"font-size: 0px;width:24px;height:24px;margin:0;margin-bottom:5px;margin-right:5px\"><img loading=\"lazy\" decoding=\"async\" alt=\"twitter\" title=\"Share on Twitter\" class=\"synved-share-image synved-social-image synved-social-image-share\" width=\"24\" height=\"24\" style=\"display: inline;width:24px;height:24px;margin: 0;padding: 0;border: none;box-shadow: none\" src=\"https:\/\/blog.capdata.fr\/wp-content\/plugins\/social-media-feather\/synved-social\/image\/social\/regular\/48x48\/twitter.png\" \/><\/a><a class=\"synved-social-button synved-social-button-share synved-social-size-24 synved-social-resolution-single synved-social-provider-linkedin nolightbox\" data-provider=\"linkedin\" target=\"_blank\" rel=\"nofollow\" title=\"Share on Linkedin\" href=\"https:\/\/www.linkedin.com\/shareArticle?mini=true&#038;url=https%3A%2F%2Fblog.capdata.fr%2Findex.php%2Fwp-json%2Fwp%2Fv2%2Fposts%2F8438&#038;title=PostgreSQL%20Benchmarking\" style=\"font-size: 0px;width:24px;height:24px;margin:0;margin-bottom:5px;margin-right:5px\"><img loading=\"lazy\" decoding=\"async\" alt=\"linkedin\" title=\"Share on Linkedin\" class=\"synved-share-image synved-social-image synved-social-image-share\" width=\"24\" height=\"24\" style=\"display: inline;width:24px;height:24px;margin: 0;padding: 0;border: none;box-shadow: none\" src=\"https:\/\/blog.capdata.fr\/wp-content\/plugins\/social-media-feather\/synved-social\/image\/social\/regular\/48x48\/linkedin.png\" \/><\/a><a class=\"synved-social-button synved-social-button-share synved-social-size-24 synved-social-resolution-single synved-social-provider-mail nolightbox\" data-provider=\"mail\" rel=\"nofollow\" title=\"Share by email\" href=\"mailto:?subject=PostgreSQL%20Benchmarking&#038;body=Article%20sur%20le%20blog%20de%20la%20Capdata%20Tech%20Team%20%3A%20:%20https%3A%2F%2Fblog.capdata.fr%2Findex.php%2Fwp-json%2Fwp%2Fv2%2Fposts%2F8438\" style=\"font-size: 0px;width:24px;height:24px;margin:0;margin-bottom:5px\"><img loading=\"lazy\" decoding=\"async\" alt=\"mail\" title=\"Share by email\" class=\"synved-share-image synved-social-image synved-social-image-share\" width=\"24\" height=\"24\" style=\"display: inline;width:24px;height:24px;margin: 0;padding: 0;border: none;box-shadow: none\" src=\"https:\/\/blog.capdata.fr\/wp-content\/plugins\/social-media-feather\/synved-social\/image\/social\/regular\/48x48\/mail.png\" \/><\/a><h1>Benchmarking on PostgreSQL<\/h1>\n<p>Cet article fait echo \u00e0 la s\u00e9rie d&#8217;articles sur les benchmarks r\u00e9alis\u00e9s sur les PaaS PostgreSQL de diff\u00e9rents cloud provider.<\/p>\n<p>Je vais donc vous pr\u00e9senter ici une m\u00e9thode pour r\u00e9aliser assez simplement un benchmark de type TPC-C sur une BDD PostgreSQL (facilement adaptable \u00e0 d&#8217;autres SGBD vu que la m\u00e9thode de connexion utilis\u00e9e ici est bas\u00e9e sur jdbc).<\/p>\n<p>(tout ceci est d\u00e9crit dans le wiki : <a href=\"https:\/\/github.com\/Capdata\/benchmarksql\/wiki\">https:\/\/github.com\/Capdata\/benchmarksql\/wiki<\/a> et dans le README)<\/p>\n<h1>Pr\u00e9paration de l&#8217;environnement<\/h1>\n<p>(Est exclu ici l&#8217;installation de PostgreSQL)<\/p>\n<h2>R\u00e9cup\u00e9ration des sources<\/h2>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\npostgres@capdata:~$ git clone https:\/\/github.com\/Capdata\/benchmarksql.git\r\nCloning into 'benchmarksql'...\r\nremote: Enumerating objects: 42, done.\r\nremote: Counting objects: 100% (42\/42), done.\r\nremote: Compressing objects: 100% (35\/35), done.\r\nremote: Total 524 (delta 9), reused 30 (delta 5), pack-reused 482\r\nReceiving objects: 100% (524\/524), 6.14 MiB | 1.23 MiB\/s, done.\r\nResolving deltas: 100% (230\/230), done.\r\n<\/pre>\n<p>Partant de l\u00e0 soit on compile directement depuis les sources, soit on r\u00e9cup\u00e8re les sources d\u00e9j\u00e0 compil\u00e9es.<\/p>\n<h3>Compilation<\/h3>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\npostgres@capdata:~\/benchmarksql$ ant\r\nBuildfile: \/var\/lib\/postgresql\/benchmarksql\/build.xml\r\n\r\ninit:\r\n[mkdir] Created dir: \/var\/lib\/postgresql\/benchmarksql\/build\r\n\r\ncompile:\r\n[javac] Compiling 13 source files to \/var\/lib\/postgresql\/benchmarksql\/build\r\n\r\ndist:\r\n[mkdir] Created dir: \/var\/lib\/postgresql\/benchmarksql\/dist\r\n[jar] Building jar: \/var\/lib\/postgresql\/benchmarksql\/dist\/BenchmarkSQL-5.0.jar\r\n\r\nBUILD SUCCESSFUL\r\nTotal time: 4 seconds\r\n<\/pre>\n<h3>R\u00e9cup\u00e9ration du package pr\u00e9-compil\u00e9<\/h3>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\npostgres@capdata:~$ cd benchmarksql\/run\/\r\npostgres@capdata:~\/benchmarksql\/run$ wget https:\/\/github.com\/Capdata\/benchmarksql\/releases\/download\/v5.1\/BenchmarkSQL-5.1.jar\r\n--2020-12-18 15:43:12-- https:\/\/github.com\/Capdata\/benchmarksql\/releases\/download\/v5.1\/BenchmarkSQL-5.1.jar\r\nResolving github.com (github.com)... 140.82.121.3\r\nConnecting to github.com (github.com)|140.82.121.3|:443... connected.\r\nHTTP request sent, awaiting response... 302 Found\r\nLocation: https:\/\/github-production-release-asset-2e65be.s3.amazonaws.com\/311352109\/cee1a380-4099-11eb-9e6d-b7c7d2a45a5e?X-Amz-Algorithm=AWS4-HMAC-SHA256&amp;amp;X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20201218%2Fus-east-1%2Fs3%2Faws4_request&amp;amp;X-Amz-Date=20201218T144314Z&amp;amp;X-Amz-Expires=300&amp;amp;X-Amz-Signature=76b47533e7e988c887c13e7044fa2ccfede086ccad7a9acc2c3d721f3c65b7f5&amp;amp;X-Amz-SignedHeaders=host&amp;amp;actor_id=0&amp;amp;key_id=0&amp;amp;repo_id=311352109&amp;amp;response-content-disposition=attachment%3B%20filename%3DBenchmarkSQL-5.1.jar&amp;amp;response-content-type=application%2Foctet-stream [following]\r\n--2020-12-18 15:43:12-- https:\/\/github-production-release-asset-2e65be.s3.amazonaws.com\/311352109\/cee1a380-4099-11eb-9e6d-b7c7d2a45a5e?X-Amz-Algorithm=AWS4-HMAC-SHA256&amp;amp;X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20201218%2Fus-east-1%2Fs3%2Faws4_request&amp;amp;X-Amz-Date=20201218T144314Z&amp;amp;X-Amz-Expires=300&amp;amp;X-Amz-Signature=76b47533e7e988c887c13e7044fa2ccfede086ccad7a9acc2c3d721f3c65b7f5&amp;amp;X-Amz-SignedHeaders=host&amp;amp;actor_id=0&amp;amp;key_id=0&amp;amp;repo_id=311352109&amp;amp;response-content-disposition=attachment%3B%20filename%3DBenchmarkSQL-5.1.jar&amp;amp;response-content-type=application%2Foctet-stream\r\nResolving github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)... 52.217.111.196\r\nConnecting to github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)|52.217.111.196|:443... connected.\r\nHTTP request sent, awaiting response... 200 OK\r\nLength: 67398 (66K) [application\/octet-stream]\r\nSaving to: \u2018BenchmarkSQL-5.1.jar\u2019\r\n\r\nBenchmarkSQL-5.1.jar 100%[==================================================================================&amp;gt;] 65.82K 292KB\/s in 0.2s\r\n\r\n2020-12-18 15:43:13 (292 KB\/s) - \u2018BenchmarkSQL-5.1.jar\u2019 saved [67398\/67398]\r\n<\/pre>\n<h2>Pr\u00e9paration du SGBD<\/h2>\n<p>On commence donc par v\u00e9rifier que tout fonctionne correctement :<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\npostgres@capdata:~$ ps -ef | grep postgres\r\npostgres 918 1 0 12:15 ? 00:00:00 \/usr\/lib\/postgresql\/12\/bin\/postgres -D \/var\/lib\/postgresql\/12\/main -c config_file=\/etc\/postgresql\/12\/main\/postgresql.conf\r\npostgres 996 918 0 12:15 ? 00:00:00 postgres: 12\/main: checkpointer\r\npostgres 997 918 0 12:15 ? 00:00:00 postgres: 12\/main: background writer\r\npostgres 998 918 0 12:15 ? 00:00:00 postgres: 12\/main: walwriter\r\npostgres 999 918 0 12:15 ? 00:00:00 postgres: 12\/main: autovacuum launcher\r\npostgres 1000 918 0 12:15 ? 00:00:00 postgres: 12\/main: stats collector\r\npostgres 1001 918 0 12:15 ? 00:00:00 postgres: 12\/main: logical replication launcher\r\nroot 37426 2242 0 15:06 pts\/0 00:00:00 su - postgres\r\npostgres 37427 37426 0 15:06 pts\/0 00:00:00 -bash\r\npostgres 37444 37427 0 15:06 pts\/0 00:00:00 ps -ef\r\npostgres 37445 37427 0 15:06 pts\/0 00:00:00 grep postgres\r\n<\/pre>\n<p>On se connecte ensuite au SGBD et on cr\u00e9e une BDD et un user pour les besoins du Bench :<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\npostgres@capdata:~\/benchmarksql\/run$ psql\r\npsql (12.5 (Ubuntu 12.5-0ubuntu0.20.04.1))\r\nType &quot;help&quot; for help.\r\n\r\npostgres=# create user tpcc password 'mysecuredpassword';\r\nCREATE ROLE\r\npostgres=# create database benchmark owner tpcc;\r\nCREATE DATABASE\r\npostgres=#\r\n<\/pre>\n<h2>Param\u00e9trage du bench<\/h2>\n<p>Il faut alors modifier le fichier de configuration pour &#8220;coller&#8221; \u00e0 sa propre configuration, on commence par copier l&#8217;exemple :<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\npostgres@osboxes:~$ cd benchmarksql\/run\/\r\npostgres@osboxes:~\/benchmarksql\/run$ ls\r\nBenchmarkSQL-5.1.jar generateReport.sh props.fb props.pg runDatabaseDestroy.sh sql.common sql.oracle\r\nfuncs.sh log4j.properties props.mysql runBenchmark.sh runLoader.sh sql.firebird sql.postgres\r\ngenerateGraphs.sh misc props.ora runDatabaseBuild.sh runSQL.sh sql.mysql\r\npostgres@osboxes:~\/benchmarksql\/run$ cp props.pg my_props.pg\r\n<\/pre>\n<p>Puis on l&#8217;\u00e9dite pour changer les informations importantes, tout d&#8217;abord les \u00e9l\u00e9ments de connexion \u00e0 la BDD, exemple :<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\ndb=postgres                                        # Type de SGBD\r\ndriver=org.postgresql.Driver                       # Driver jdbc utilis\u00e9\r\nconn=jdbc:postgresql:\/\/192.168.56.2:5432\/benchmark    # L'url jdbc de connexion\r\nSSL=true                                           # Si SSL est utilis\u00e9 ou pas\r\nuser=tpcc                                          # Username\r\npassword=mysecuredpassword                         # Password\r\n<\/pre>\n<p>Puis on configure le &#8220;profil de charge&#8221; :<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\nwarehouses=200     # Nombre d'entrepots, va d\u00e9terminer la taille de la BDD finale, compter 1Go \/ entrepot\r\nloadWorkers=4      # Nombre de process en \/\/ pour charger la BDD (ne sert que pour le chargement)\r\n\r\nterminals=100      # Nombre de &quot;clients&quot; simultan\u00e9s \r\n\r\nrunTxnsPerTerminal=0  # On choisi alors soit un nombre de Transaction \/ Client (terminal)\r\nrunMins=20            # Ou une dur\u00e9e du benchmark (seul un des 2 param\u00e8tre doit avoir une valeur &gt; 0)\r\nlimitTxnsPerMin=0     # On peut \u00e9galement limiter le nombre de transaction \/ minute\r\n<\/pre>\n<h2>Chargement des donn\u00e9es<\/h2>\n<p>On peut alors lancer la cr\u00e9ation des structures et le chargement des donn\u00e9es :<\/p>\n<pre class=\"brush: sql; title: ; notranslate\" title=\"\">\r\npostgres@osboxes:~\/benchmarksql\/run$ .\/runDatabaseBuild.sh my_props.pg\r\n# ------------------------------------------------------------\r\n# Loading SQL file .\/sql.common\/tableCreates.sql\r\n# ------------------------------------------------------------\r\ncreate table bmsql_config (\r\ncfg_name    varchar(30) primary key,\r\ncfg_value   varchar(50)\r\n);\r\ncreate table bmsql_warehouse (\r\nw_id        integer   not null,\r\nw_ytd       decimal(12,2),\r\nw_tax       decimal(4,4),\r\nw_name      varchar(10),\r\nw_street_1  varchar(20),\r\nw_street_2  varchar(20),\r\nw_city      varchar(20),\r\nw_state     char(2),\r\nw_zip       char(9)\r\n);\r\ncreate table bmsql_district (\r\nd_w_id       integer       not null,\r\nd_id         integer       not null,\r\nd_ytd        decimal(12,2),\r\nd_tax        decimal(4,4),\r\nd_next_o_id  integer,\r\nd_name       varchar(10),\r\nd_street_1   varchar(20),\r\nd_street_2   varchar(20),\r\nd_city       varchar(20),\r\nd_state      char(2),\r\nd_zip        char(9)\r\n);\r\ncreate table bmsql_customer (\r\nc_w_id         integer        not null,\r\nc_d_id         integer        not null,\r\nc_id           integer        not null,\r\nc_discount     decimal(4,4),\r\nc_credit       char(2),\r\nc_last         varchar(16),\r\nc_first        varchar(16),\r\nc_credit_lim   decimal(12,2),\r\nc_balance      decimal(12,2),\r\nc_ytd_payment  decimal(12,2),\r\nc_payment_cnt  integer,\r\nc_delivery_cnt integer,\r\nc_street_1     varchar(20),\r\nc_street_2     varchar(20),\r\nc_city         varchar(20),\r\nc_state        char(2),\r\nc_zip          char(9),\r\nc_phone        char(16),\r\nc_since        timestamp,\r\nc_middle       char(2),\r\nc_data         varchar(500)\r\n);\r\ncreate sequence bmsql_hist_id_seq;\r\ncreate table bmsql_history (\r\nhist_id  integer,\r\nh_c_id   integer,\r\nh_c_d_id integer,\r\nh_c_w_id integer,\r\nh_d_id   integer,\r\nh_w_id   integer,\r\nh_date   timestamp,\r\nh_amount decimal(6,2),\r\nh_data   varchar(24)\r\n);\r\ncreate table bmsql_new_order (\r\nno_w_id  integer   not null,\r\nno_d_id  integer   not null,\r\nno_o_id  integer   not null\r\n);\r\ncreate table bmsql_oorder (\r\no_w_id       integer      not null,\r\no_d_id       integer      not null,\r\no_id         integer      not null,\r\no_c_id       integer,\r\no_carrier_id integer,\r\no_ol_cnt     integer,\r\no_all_local  integer,\r\no_entry_d    timestamp\r\n);\r\ncreate table bmsql_order_line (\r\nol_w_id         integer   not null,\r\nol_d_id         integer   not null,\r\nol_o_id         integer   not null,\r\nol_number       integer   not null,\r\nol_i_id         integer   not null,\r\nol_delivery_d   timestamp,\r\nol_amount       decimal(6,2),\r\nol_supply_w_id  integer,\r\nol_quantity     integer,\r\nol_dist_info    char(24)\r\n);\r\ncreate table bmsql_item (\r\ni_id     integer      not null,\r\ni_name   varchar(24),\r\ni_price  decimal(5,2),\r\ni_data   varchar(50),\r\ni_im_id  integer\r\n);\r\ncreate table bmsql_stock (\r\ns_w_id       integer       not null,\r\ns_i_id       integer       not null,\r\ns_quantity   integer,\r\ns_ytd        integer,\r\ns_order_cnt  integer,\r\ns_remote_cnt integer,\r\ns_data       varchar(50),\r\ns_dist_01    char(24),\r\ns_dist_02    char(24),\r\ns_dist_03    char(24),\r\ns_dist_04    char(24),\r\ns_dist_05    char(24),\r\ns_dist_06    char(24),\r\ns_dist_07    char(24),\r\ns_dist_08    char(24),\r\ns_dist_09    char(24),\r\ns_dist_10    char(24)\r\n);\r\nStarting BenchmarkSQL LoadData\r\n\r\ndriver=org.postgresql.Driver\r\nconn=jdbc:postgresql:\/\/192.168.56.2:5432\/benchmark\r\nuser=tpcc\r\npassword=***********\r\nwarehouses=200\r\nloadWorkers=4\r\nfileLocation (not defined)\r\ncsvNullValue (not defined - using default 'NULL')\r\n\r\nWorker 000: Loading ITEM\r\nWorker 001: Loading Warehouse      1\r\nWorker 003: Loading Warehouse      2\r\nWorker 002: Loading Warehouse      3\r\n..........\r\n..........\r\nWorker 000: Loading Warehouse    199 done\r\nWorker 003: Loading Warehouse    200 done\r\n# ------------------------------------------------------------\r\n# Loading SQL file .\/sql.common\/indexCreates.sql\r\n# ------------------------------------------------------------\r\nalter table bmsql_warehouse add constraint bmsql_warehouse_pkey\r\nprimary key (w_id);\r\nalter table bmsql_district add constraint bmsql_district_pkey\r\nprimary key (d_w_id, d_id);\r\nalter table bmsql_customer add constraint bmsql_customer_pkey\r\nprimary key (c_w_id, c_d_id, c_id);\r\ncreate index bmsql_customer_idx1\r\non  bmsql_customer (c_w_id, c_d_id, c_last, c_first);\r\nalter table bmsql_oorder add constraint bmsql_oorder_pkey\r\nprimary key (o_w_id, o_d_id, o_id);\r\ncreate unique index bmsql_oorder_idx1\r\non  bmsql_oorder (o_w_id, o_d_id, o_carrier_id, o_id);\r\nalter table bmsql_new_order add constraint bmsql_new_order_pkey\r\nprimary key (no_w_id, no_d_id, no_o_id);\r\nalter table bmsql_order_line add constraint bmsql_order_line_pkey\r\nprimary key (ol_w_id, ol_d_id, ol_o_id, ol_number);\r\nalter table bmsql_stock add constraint bmsql_stock_pkey\r\nprimary key (s_w_id, s_i_id);\r\nalter table bmsql_item add constraint bmsql_item_pkey\r\nprimary key (i_id);\r\n# ------------------------------------------------------------\r\n# Loading SQL file .\/sql.common\/foreignKeys.sql\r\n# ------------------------------------------------------------\r\nalter table bmsql_district add constraint d_warehouse_fkey\r\nforeign key (d_w_id)\r\nreferences bmsql_warehouse (w_id);\r\nalter table bmsql_customer add constraint c_district_fkey\r\nforeign key (c_w_id, c_d_id)\r\nreferences bmsql_district (d_w_id, d_id);\r\nalter table bmsql_history add constraint h_customer_fkey\r\nforeign key (h_c_w_id, h_c_d_id, h_c_id)\r\nreferences bmsql_customer (c_w_id, c_d_id, c_id);\r\nalter table bmsql_history add constraint h_district_fkey\r\nforeign key (h_w_id, h_d_id)\r\nreferences bmsql_district (d_w_id, d_id);\r\nalter table bmsql_new_order add constraint no_order_fkey\r\nforeign key (no_w_id, no_d_id, no_o_id)\r\nreferences bmsql_oorder (o_w_id, o_d_id, o_id);\r\nalter table bmsql_oorder add constraint o_customer_fkey\r\nforeign key (o_w_id, o_d_id, o_c_id)\r\nreferences bmsql_customer (c_w_id, c_d_id, c_id);\r\nalter table bmsql_order_line add constraint ol_order_fkey\r\nforeign key (ol_w_id, ol_d_id, ol_o_id)\r\nreferences bmsql_oorder (o_w_id, o_d_id, o_id);\r\nalter table bmsql_order_line add constraint ol_stock_fkey\r\nforeign key (ol_supply_w_id, ol_i_id)\r\nreferences bmsql_stock (s_w_id, s_i_id);\r\nalter table bmsql_stock add constraint s_warehouse_fkey\r\nforeign key (s_w_id)\r\nreferences bmsql_warehouse (w_id);\r\nalter table bmsql_stock add constraint s_item_fkey\r\nforeign key (s_i_id)\r\nreferences bmsql_item (i_id);\r\n# ------------------------------------------------------------\r\n# Loading SQL file .\/sql.postgres\/extraHistID.sql\r\n# ------------------------------------------------------------\r\n-- ----\r\n-- Extra Schema objects\/definitions for history.hist_id in PostgreSQL\r\n-- ----\r\n-- ----\r\n--      This is an extra column not present in the TPC-C\r\n--      specs. It is useful for replication systems like\r\n--      Bucardo and Slony-I, which like to have a primary\r\n--      key on a table. It is an auto-increment or serial\r\n--      column type. The definition below is compatible\r\n--      with Oracle 11g, using a sequence and a trigger.\r\n-- ----\r\n-- Adjust the sequence above the current max(hist_id)\r\nselect setval('bmsql_hist_id_seq', (select max(hist_id) from bmsql_history));\r\n-- Make nextval(seq) the default value of the hist_id column.\r\nalter table bmsql_history\r\nalter column hist_id set default nextval('bmsql_hist_id_seq');\r\n-- Add a primary key history(hist_id)\r\nalter table bmsql_history add primary key (hist_id);\r\n# ------------------------------------------------------------\r\n# Loading SQL file .\/sql.postgres\/buildFinish.sql\r\n# ------------------------------------------------------------\r\n-- ----\r\n-- Extra commands to run after the tables are created, loaded,\r\n-- indexes built and extra's created.\r\n-- PostgreSQL version.\r\n-- ----\r\nvacuum analyze;\r\n<\/pre>\n<h2>Lancement du bench<\/h2>\n<p>Et voil\u00e0, tout est pret pour lancer le premier Bench !!<br \/>\nMais avant tout, pour pouvoir capturer les m\u00e9triques SGBD il faut au pr\u00e9alable installer la librairie python jaydbapi :<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\npostgres@osboxes:~\/benchmarksql\/run$ pip install jaydebeapi\r\nDefaulting to user installation because normal site-packages is not writeable\r\nCollecting jaydebeapi\r\n  Downloading JayDeBeApi-1.2.3-py3-none-any.whl (26 kB)\r\nCollecting JPype1\r\n  Downloading JPype1-1.2.0-cp38-cp38-manylinux2010_x86_64.whl (453 kB)\r\n     |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 453 kB 1.4 MB\/s\r\nInstalling collected packages: JPype1, jaydebeapi\r\nSuccessfully installed JPype1-1.2.0 jaydebeapi-1.2.3\r\n<\/pre>\n<p>Pour ma part j&#8217;ai choisi un bench de 20 minutes sans limiter les transactions maximales :<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\npostgres@osboxes:~\/benchmarksql\/run$ .\/runBenchmark.sh my_props.pg\r\n11:04:36,433 [main] INFO   jTPCC : Term-00,\r\n11:04:36,437 [main] INFO   jTPCC : Term-00, +-------------------------------------------------------------+\r\n11:04:36,437 [main] INFO   jTPCC : Term-00,      BenchmarkSQL v5.0\r\n11:04:36,438 [main] INFO   jTPCC : Term-00, +-------------------------------------------------------------+\r\n11:04:36,438 [main] INFO   jTPCC : Term-00,  (c) 2003, Raul Barbosa\r\n11:04:36,439 [main] INFO   jTPCC : Term-00,  (c) 2004-2016, Denis Lussier\r\n11:04:36,442 [main] INFO   jTPCC : Term-00,  (c) 2016, Jan Wieck\r\n11:04:36,443 [main] INFO   jTPCC : Term-00,  (c) 2020, Nicolas Martin\r\n11:04:36,443 [main] INFO   jTPCC : Term-00, +-------------------------------------------------------------+\r\n11:04:36,444 [main] INFO   jTPCC : Term-00,\r\n11:04:36,470 [main] INFO   jTPCC : Term-00, db=postgres\r\n11:04:36,471 [main] INFO   jTPCC : Term-00, driver=org.postgresql.Driver\r\n11:04:36,471 [main] INFO   jTPCC : Term-00, conn=jdbc:postgresql:\/\/192.168.56.2:5432\/benchmark\r\n11:04:36,472 [main] INFO   jTPCC : Term-00, user=tpcc\r\n11:04:36,481 [main] INFO   jTPCC : Term-00, true\r\n11:04:36,481 [main] INFO   jTPCC : Term-00,\r\n11:04:36,484 [main] INFO   jTPCC : Term-00, warehouses=200\r\n11:04:36,484 [main] INFO   jTPCC : Term-00, terminals=100\r\n11:04:36,485 [main] INFO   jTPCC : Term-00, runMins=20\r\n11:04:36,485 [main] INFO   jTPCC : Term-00, limitTxnsPerMin=0\r\n11:04:36,486 [main] INFO   jTPCC : Term-00, terminalWarehouseFixed=true\r\n11:04:36,486 [main] INFO   jTPCC : Term-00,\r\n11:04:36,486 [main] INFO   jTPCC : Term-00, newOrderWeight=45\r\n11:04:36,487 [main] INFO   jTPCC : Term-00, paymentWeight=43\r\n11:04:36,487 [main] INFO   jTPCC : Term-00, orderStatusWeight=4\r\n11:04:36,487 [main] INFO   jTPCC : Term-00, deliveryWeight=4\r\n11:04:36,487 [main] INFO   jTPCC : Term-00, stockLevelWeight=4\r\n11:04:36,488 [main] INFO   jTPCC : Term-00,\r\n11:04:36,488 [main] INFO   jTPCC : Term-00, resultDirectory=my_result_%tY-%tm-%td_%tH%tM%tS\r\n11:04:36,488 [main] INFO   jTPCC : Term-00, osCollectorScript=null\r\n11:04:36,489 [main] INFO   jTPCC : Term-00, dbCollectorScript=.\/misc\/db_collector_pg.py\r\n11:04:36,489 [main] INFO   jTPCC : Term-00,\r\n11:04:36,528 [main] INFO   jTPCC : Term-00, copied my_props.pg to my_result_2020-12-31_110436\/run.properties      Term-00, Running Average tpmTOTAL: 745.88    Curren11:24:48,766 [Thread-73] INFO   jTPCC : Term-00,                                                                                                                     11:24:48,804 [Thread-73] INFO   jTPCC : Term-00,                                                                                                                     11:24:48,820 [Thread-73] INFO   jTPCC : Term-00, Measured tpmC (NewOrders) = 336.28                                                                                  11:24:48,829 [Thread-73] INFO   jTPCC : Term-00, Measured tpmTOTAL = 746.04                                                                                          11:24:48,832 [Thread-73] INFO   jTPCC : Term-00, Session Start     = 2020-12-31 11:04:42                                                                             11:24:48,834 [Thread-73] INFO   jTPCC : Term-00, Session End       = 2020-12-31 11:24:48                                                                             11:24:48,848 [Thread-73] INFO   jTPCC : Term-00, Transaction Count = 14994                                                                                           postgres@osboxes:~\/benchmarksql\/run$ Exception ignored in: &lt;_io.TextIOWrapper name='&lt;stdout&gt;' mode='w' encoding='utf-8'&gt;\r\nBrokenPipeError: [Errno 32] Broken pipe\r\n<\/pre>\n<p>A l&#8217;issue du bench on peut retrouver les r\u00e9sultats dans le r\u00e9pertoire d\u00e9crit dans le fichier de configuration, rappel\u00e9 aussi lors du lancement du bench :<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\n11:04:36,488 [main] INFO   jTPCC : Term-00, resultDirectory=my_result_%tY-%tm-%td_%tH%tM%tS\r\n<\/pre>\n<p>Le dossier contient ce qui suit :<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\npostgres@osboxes:~\/benchmarksql\/run$ ls -ltr my_result_2020-12-31_110436\r\ntotal 8\r\n-rw-rw-r-- 1 postgres postgres 1110 Dec 31 11:04 run.properties\r\ndrwxrwxr-x 2 postgres postgres 4096 Dec 31 11:04 data\r\n<\/pre>\n<p>Le fichier run.properties qui est une copie du fichier de configuration tel qu&#8217;il \u00e9tait lors du lancement du bench et le dossier data contenant les r\u00e9sultats du bench :<\/p>\n<ul>\n<li>les m\u00e9triques du bench dans <strong>result.csv<\/strong><\/li>\n<li>les m\u00e9triques sgbd dans <strong>db_info.csv<\/strong><\/li>\n<li>les d\u00e9tails du run dans <strong>runInfos.csv<\/strong><\/li>\n<\/ul>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\npostgres@osboxes:~\/benchmarksql\/run\/my_result_2020-12-31_110436$ ls -ltr data\r\ntotal 976\r\n-rw-rw-r-- 1 postgres postgres    220 Dec 31 11:04 runInfo.csv\r\n-rw-rw-r-- 1 postgres postgres 539235 Dec 31 11:24 result.csv\r\n-rw-rw-r-- 1 postgres postgres 444120 Dec 31 11:24 db_info.csv\r\n<\/pre>\n<h2>Analyse des r\u00e9sultats<\/h2>\n<p>Pour ce faire, il est n\u00e9cessaire de disposer de R avec les packages suivants :<\/p>\n<ul>\n<li>jsonlite (n\u00e9cessaire uniquement en cas d&#8217;utilisation des scripts clouders)<\/li>\n<li>tidyverse<\/li>\n<li>lubridate<\/li>\n<li>ggplot2<\/li>\n<li>hrbrthemes<\/li>\n<li>viridis<\/li>\n<li>htmlwidgets<\/li>\n<\/ul>\n<p>Pour g\u00e9n\u00e9rer le rapport il faut lancer :<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\npostgres@osboxes:~\/benchmarksql\/run$ .\/generateReport.sh my_result_2020-12-31_110436\r\nGenerating my_result_2020-12-31_110436\/p_db.png ... OK\r\nGenerating my_result_2020-12-31_110436\/tpm_nopm.png ... OK\r\nGenerating my_result_2020-12-31_110436\/latency.png ... OK\r\nGenerating my_result_2020-12-31_110436\/cpu_utilization.png ... Error in file(file, &quot;rt&quot;) : cannot open the connection\r\nCalls: read.csv -&gt; read.table -&gt; file\r\nIn addition: Warning message:\r\nIn file(file, &quot;rt&quot;) :\r\n  cannot open file 'data\/sys_info.csv': No such file or directory\r\nExecution halted\r\nERROR\r\n\r\nR version 3.6.3 (2020-02-29) -- &quot;Holding the Windsock&quot;\r\nCopyright (C) 2020 The R Foundation for Statistical Computing\r\nPlatform: x86_64-pc-linux-gnu (64-bit)\r\n\r\nR is free software and comes with ABSOLUTELY NO WARRANTY.\r\nYou are welcome to redistribute it under certain conditions.\r\nType 'license()' or 'licence()' for distribution details.\r\n\r\n  Natural language support but running in an English locale\r\n\r\nR is a collaborative project with many contributors.\r\nType 'contributors()' for more information and\r\n'citation()' on how to cite R or R packages in publications.\r\n\r\nType 'demo()' for some demos, 'help()' for on-line help, or\r\n'help.start()' for an HTML browser interface to help.\r\nType 'q()' to quit R.\r\n\r\n&gt; # ----\r\n&gt; # R graph to show CPU utilization\r\n&gt; # ----\r\n&gt;\r\n&gt; # ----\r\n&gt; # Read the runInfo.csv file.\r\n&gt; # ----\r\n&gt; runInfo &lt;- read.csv(&quot;data\/runInfo.csv&quot;, head=TRUE)\r\n&gt;\r\n&gt; # ----\r\n&gt; # Determine the grouping interval in seconds based on the\r\n&gt; # run duration.\r\n&gt; # ----\r\n&gt; xmax &lt;- runInfo$runMins\r\n&gt; for (interval in c(1, 2, 5, 10, 20, 60, 120, 300, 600)) {\r\n+     if ((xmax * 60) \/ interval &lt;= 1000) {\r\n+         break\r\n+     }\r\n+ }\r\n&gt; idiv &lt;- interval * 1000.0\r\n&gt;\r\n&gt; # ----\r\n&gt; # Read the recorded CPU data and aggregate it for the desired interval.\r\n&gt; # ----\r\n&gt; rawData &lt;- read.csv(&quot;data\/sys_info.csv&quot;, head=TRUE)\r\nGenerating my_result_2020-12-31_110436\/report.html ... OK\r\n<\/pre>\n<p>Les derni\u00e8res erreurs peuvent \u00eatre ignor\u00e9es du fait que les m\u00e9triques OS n&#8217;ont pas \u00e9t\u00e9 captur\u00e9es.<br \/>\nLe r\u00e9sultat est g\u00e9n\u00e9r\u00e9 dans le dossier racine :<\/p>\n<pre class=\"brush: bash; title: ; notranslate\" title=\"\">\r\npostgres@osboxes:~\/benchmarksql\/run$ ls -ltr my_result_2020-12-31_110436\r\ntotal 328\r\n-rw-rw-r-- 1 postgres postgres   1110 Dec 31 11:04 run.properties\r\ndrwxrwxr-x 2 postgres postgres   4096 Dec 31 16:21 data\r\n-rw-rw-r-- 1 postgres postgres  18711 Dec 31 17:25 p_db.png\r\n-rw-rw-r-- 1 postgres postgres 128315 Dec 31 17:25 tpm_nopm.png\r\n-rw-rw-r-- 1 postgres postgres 165201 Dec 31 17:25 latency.png\r\n-rw-rw-r-- 1 postgres postgres   7125 Dec 31 17:25 report.html\r\n<\/pre>\n<a class=\"synved-social-button synved-social-button-share synved-social-size-24 synved-social-resolution-single synved-social-provider-twitter nolightbox\" data-provider=\"twitter\" target=\"_blank\" rel=\"nofollow\" title=\"Share on Twitter\" href=\"https:\/\/twitter.com\/intent\/tweet?url=https%3A%2F%2Fblog.capdata.fr%2Findex.php%2Fwp-json%2Fwp%2Fv2%2Fposts%2F8438&#038;text=Article%20sur%20le%20blog%20de%20la%20Capdata%20Tech%20Team%20%3A%20\" style=\"font-size: 0px;width:24px;height:24px;margin:0;margin-bottom:5px;margin-right:5px\"><img loading=\"lazy\" decoding=\"async\" alt=\"twitter\" title=\"Share on Twitter\" class=\"synved-share-image synved-social-image synved-social-image-share\" width=\"24\" height=\"24\" style=\"display: inline;width:24px;height:24px;margin: 0;padding: 0;border: none;box-shadow: none\" src=\"https:\/\/blog.capdata.fr\/wp-content\/plugins\/social-media-feather\/synved-social\/image\/social\/regular\/48x48\/twitter.png\" \/><\/a><a class=\"synved-social-button synved-social-button-share synved-social-size-24 synved-social-resolution-single synved-social-provider-linkedin nolightbox\" data-provider=\"linkedin\" target=\"_blank\" rel=\"nofollow\" title=\"Share on Linkedin\" href=\"https:\/\/www.linkedin.com\/shareArticle?mini=true&#038;url=https%3A%2F%2Fblog.capdata.fr%2Findex.php%2Fwp-json%2Fwp%2Fv2%2Fposts%2F8438&#038;title=PostgreSQL%20Benchmarking\" style=\"font-size: 0px;width:24px;height:24px;margin:0;margin-bottom:5px;margin-right:5px\"><img loading=\"lazy\" decoding=\"async\" alt=\"linkedin\" title=\"Share on Linkedin\" class=\"synved-share-image synved-social-image synved-social-image-share\" width=\"24\" height=\"24\" style=\"display: inline;width:24px;height:24px;margin: 0;padding: 0;border: none;box-shadow: none\" src=\"https:\/\/blog.capdata.fr\/wp-content\/plugins\/social-media-feather\/synved-social\/image\/social\/regular\/48x48\/linkedin.png\" \/><\/a><a class=\"synved-social-button synved-social-button-share synved-social-size-24 synved-social-resolution-single synved-social-provider-mail nolightbox\" data-provider=\"mail\" rel=\"nofollow\" title=\"Share by email\" href=\"mailto:?subject=PostgreSQL%20Benchmarking&#038;body=Article%20sur%20le%20blog%20de%20la%20Capdata%20Tech%20Team%20%3A%20:%20https%3A%2F%2Fblog.capdata.fr%2Findex.php%2Fwp-json%2Fwp%2Fv2%2Fposts%2F8438\" style=\"font-size: 0px;width:24px;height:24px;margin:0;margin-bottom:5px\"><img loading=\"lazy\" decoding=\"async\" alt=\"mail\" title=\"Share by email\" class=\"synved-share-image synved-social-image synved-social-image-share\" width=\"24\" height=\"24\" style=\"display: inline;width:24px;height:24px;margin: 0;padding: 0;border: none;box-shadow: none\" src=\"https:\/\/blog.capdata.fr\/wp-content\/plugins\/social-media-feather\/synved-social\/image\/social\/regular\/48x48\/mail.png\" \/><\/a>","protected":false},"excerpt":{"rendered":"<p>Benchmarking on PostgreSQL Cet article fait echo \u00e0 la s\u00e9rie d&#8217;articles sur les benchmarks r\u00e9alis\u00e9s sur les PaaS PostgreSQL de diff\u00e9rents cloud provider. Je vais donc vous pr\u00e9senter ici une m\u00e9thode pour r\u00e9aliser assez simplement un benchmark de type TPC-C&hellip; <a href=\"https:\/\/blog.capdata.fr\/index.php\/postgresql-benchmarking\/\" class=\"more-link\">Continuer la lecture <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":8448,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[295,282,1,266],"tags":[],"class_list":["post-8438","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-aws","category-azure","category-non-classe","category-postgresql"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v20.8 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>PostgreSQL Benchmarking - Capdata TECH BLOG<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/blog.capdata.fr\/index.php\/postgresql-benchmarking\/\" \/>\n<meta property=\"og:locale\" content=\"fr_FR\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"PostgreSQL Benchmarking - Capdata TECH BLOG\" \/>\n<meta property=\"og:description\" content=\"Benchmarking on PostgreSQL Cet article fait echo \u00e0 la s\u00e9rie d&#8217;articles sur les benchmarks r\u00e9alis\u00e9s sur les PaaS PostgreSQL de diff\u00e9rents cloud provider. Je vais donc vous pr\u00e9senter ici une m\u00e9thode pour r\u00e9aliser assez simplement un benchmark de type TPC-C&hellip; Continuer la lecture &rarr;\" \/>\n<meta property=\"og:url\" content=\"https:\/\/blog.capdata.fr\/index.php\/postgresql-benchmarking\/\" \/>\n<meta property=\"og:site_name\" content=\"Capdata TECH BLOG\" \/>\n<meta property=\"article:published_time\" content=\"2021-01-15T14:01:30+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2020\/12\/benchmarking-concept-banniere-web-idee-entreprise_277904-2528.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"626\" \/>\n\t<meta property=\"og:image:height\" content=\"352\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Capdata team\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"\u00c9crit par\" \/>\n\t<meta name=\"twitter:data1\" content=\"Capdata team\" \/>\n\t<meta name=\"twitter:label2\" content=\"Dur\u00e9e de lecture estim\u00e9e\" \/>\n\t<meta name=\"twitter:data2\" content=\"15 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/blog.capdata.fr\/index.php\/postgresql-benchmarking\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/blog.capdata.fr\/index.php\/postgresql-benchmarking\/\"},\"author\":{\"name\":\"Capdata team\",\"@id\":\"https:\/\/blog.capdata.fr\/#\/schema\/person\/bfd9395c8ba4fa125792a543377035e9\"},\"headline\":\"PostgreSQL Benchmarking\",\"datePublished\":\"2021-01-15T14:01:30+00:00\",\"dateModified\":\"2021-01-15T14:01:30+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/blog.capdata.fr\/index.php\/postgresql-benchmarking\/\"},\"wordCount\":3035,\"commentCount\":0,\"publisher\":{\"@id\":\"https:\/\/blog.capdata.fr\/#organization\"},\"articleSection\":{\"0\":\"AWS\",\"1\":\"Azure\",\"3\":\"PostgreSQL\"},\"inLanguage\":\"fr-FR\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/blog.capdata.fr\/index.php\/postgresql-benchmarking\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/blog.capdata.fr\/index.php\/postgresql-benchmarking\/\",\"url\":\"https:\/\/blog.capdata.fr\/index.php\/postgresql-benchmarking\/\",\"name\":\"PostgreSQL Benchmarking - Capdata TECH BLOG\",\"isPartOf\":{\"@id\":\"https:\/\/blog.capdata.fr\/#website\"},\"datePublished\":\"2021-01-15T14:01:30+00:00\",\"dateModified\":\"2021-01-15T14:01:30+00:00\",\"breadcrumb\":{\"@id\":\"https:\/\/blog.capdata.fr\/index.php\/postgresql-benchmarking\/#breadcrumb\"},\"inLanguage\":\"fr-FR\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/blog.capdata.fr\/index.php\/postgresql-benchmarking\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/blog.capdata.fr\/index.php\/postgresql-benchmarking\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Accueil\",\"item\":\"https:\/\/blog.capdata.fr\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"PostgreSQL Benchmarking\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/blog.capdata.fr\/#website\",\"url\":\"https:\/\/blog.capdata.fr\/\",\"name\":\"Capdata TECH BLOG\",\"description\":\"Le blog technique sur les bases de donn\u00e9es de CAP DATA Consulting\",\"publisher\":{\"@id\":\"https:\/\/blog.capdata.fr\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/blog.capdata.fr\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"fr-FR\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/blog.capdata.fr\/#organization\",\"name\":\"Capdata TECH BLOG\",\"url\":\"https:\/\/blog.capdata.fr\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\/\/blog.capdata.fr\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/01\/logo_capdata.webp\",\"contentUrl\":\"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/01\/logo_capdata.webp\",\"width\":800,\"height\":254,\"caption\":\"Capdata TECH BLOG\"},\"image\":{\"@id\":\"https:\/\/blog.capdata.fr\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.linkedin.com\/company\/cap-data-consulting\/mycompany\/\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/blog.capdata.fr\/#\/schema\/person\/bfd9395c8ba4fa125792a543377035e9\",\"name\":\"Capdata team\",\"sameAs\":[\"https:\/\/www.capdata.fr\"],\"url\":\"https:\/\/blog.capdata.fr\/index.php\/author\/admin\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"PostgreSQL Benchmarking - Capdata TECH BLOG","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/blog.capdata.fr\/index.php\/postgresql-benchmarking\/","og_locale":"fr_FR","og_type":"article","og_title":"PostgreSQL Benchmarking - Capdata TECH BLOG","og_description":"Benchmarking on PostgreSQL Cet article fait echo \u00e0 la s\u00e9rie d&#8217;articles sur les benchmarks r\u00e9alis\u00e9s sur les PaaS PostgreSQL de diff\u00e9rents cloud provider. Je vais donc vous pr\u00e9senter ici une m\u00e9thode pour r\u00e9aliser assez simplement un benchmark de type TPC-C&hellip; Continuer la lecture &rarr;","og_url":"https:\/\/blog.capdata.fr\/index.php\/postgresql-benchmarking\/","og_site_name":"Capdata TECH BLOG","article_published_time":"2021-01-15T14:01:30+00:00","og_image":[{"width":626,"height":352,"url":"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2020\/12\/benchmarking-concept-banniere-web-idee-entreprise_277904-2528.jpg","type":"image\/jpeg"}],"author":"Capdata team","twitter_card":"summary_large_image","twitter_misc":{"\u00c9crit par":"Capdata team","Dur\u00e9e de lecture estim\u00e9e":"15 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/blog.capdata.fr\/index.php\/postgresql-benchmarking\/#article","isPartOf":{"@id":"https:\/\/blog.capdata.fr\/index.php\/postgresql-benchmarking\/"},"author":{"name":"Capdata team","@id":"https:\/\/blog.capdata.fr\/#\/schema\/person\/bfd9395c8ba4fa125792a543377035e9"},"headline":"PostgreSQL Benchmarking","datePublished":"2021-01-15T14:01:30+00:00","dateModified":"2021-01-15T14:01:30+00:00","mainEntityOfPage":{"@id":"https:\/\/blog.capdata.fr\/index.php\/postgresql-benchmarking\/"},"wordCount":3035,"commentCount":0,"publisher":{"@id":"https:\/\/blog.capdata.fr\/#organization"},"articleSection":{"0":"AWS","1":"Azure","3":"PostgreSQL"},"inLanguage":"fr-FR","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/blog.capdata.fr\/index.php\/postgresql-benchmarking\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/blog.capdata.fr\/index.php\/postgresql-benchmarking\/","url":"https:\/\/blog.capdata.fr\/index.php\/postgresql-benchmarking\/","name":"PostgreSQL Benchmarking - Capdata TECH BLOG","isPartOf":{"@id":"https:\/\/blog.capdata.fr\/#website"},"datePublished":"2021-01-15T14:01:30+00:00","dateModified":"2021-01-15T14:01:30+00:00","breadcrumb":{"@id":"https:\/\/blog.capdata.fr\/index.php\/postgresql-benchmarking\/#breadcrumb"},"inLanguage":"fr-FR","potentialAction":[{"@type":"ReadAction","target":["https:\/\/blog.capdata.fr\/index.php\/postgresql-benchmarking\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/blog.capdata.fr\/index.php\/postgresql-benchmarking\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Accueil","item":"https:\/\/blog.capdata.fr\/"},{"@type":"ListItem","position":2,"name":"PostgreSQL Benchmarking"}]},{"@type":"WebSite","@id":"https:\/\/blog.capdata.fr\/#website","url":"https:\/\/blog.capdata.fr\/","name":"Capdata TECH BLOG","description":"Le blog technique sur les bases de donn\u00e9es de CAP DATA Consulting","publisher":{"@id":"https:\/\/blog.capdata.fr\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/blog.capdata.fr\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"fr-FR"},{"@type":"Organization","@id":"https:\/\/blog.capdata.fr\/#organization","name":"Capdata TECH BLOG","url":"https:\/\/blog.capdata.fr\/","logo":{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/blog.capdata.fr\/#\/schema\/logo\/image\/","url":"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/01\/logo_capdata.webp","contentUrl":"https:\/\/blog.capdata.fr\/wp-content\/uploads\/2023\/01\/logo_capdata.webp","width":800,"height":254,"caption":"Capdata TECH BLOG"},"image":{"@id":"https:\/\/blog.capdata.fr\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.linkedin.com\/company\/cap-data-consulting\/mycompany\/"]},{"@type":"Person","@id":"https:\/\/blog.capdata.fr\/#\/schema\/person\/bfd9395c8ba4fa125792a543377035e9","name":"Capdata team","sameAs":["https:\/\/www.capdata.fr"],"url":"https:\/\/blog.capdata.fr\/index.php\/author\/admin\/"}]}},"_links":{"self":[{"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/posts\/8438","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/comments?post=8438"}],"version-history":[{"count":14,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/posts\/8438\/revisions"}],"predecessor-version":[{"id":8462,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/posts\/8438\/revisions\/8462"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/media\/8448"}],"wp:attachment":[{"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/media?parent=8438"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/categories?post=8438"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.capdata.fr\/index.php\/wp-json\/wp\/v2\/tags?post=8438"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}