
PostgreSQL Benchmarking

This article is a companion to our series of posts benchmarking the PostgreSQL PaaS offerings of various cloud providers.

Here I will walk you through a fairly simple way to run a TPC-C-style benchmark against a PostgreSQL database (easily adaptable to other DBMSs, since the connection method used here is JDBC-based).

(All of this is also documented in the wiki, https://github.com/Capdata/benchmarksql/wiki, and in the README.)

Preparing the environment

(Installing PostgreSQL itself is outside the scope of this article.)

Fetching the sources

postgres@capdata:~$ git clone https://github.com/Capdata/benchmarksql.git
Cloning into 'benchmarksql'...
remote: Enumerating objects: 42, done.
remote: Counting objects: 100% (42/42), done.
remote: Compressing objects: 100% (35/35), done.
remote: Total 524 (delta 9), reused 30 (delta 5), pack-reused 482
Receiving objects: 100% (524/524), 6.14 MiB | 1.23 MiB/s, done.
Resolving deltas: 100% (230/230), done.

From there, you can either compile directly from the sources or download the pre-built package.

Compilation

postgres@capdata:~/benchmarksql$ ant
Buildfile: /var/lib/postgresql/benchmarksql/build.xml

init:
[mkdir] Created dir: /var/lib/postgresql/benchmarksql/build

compile:
[javac] Compiling 13 source files to /var/lib/postgresql/benchmarksql/build

dist:
[mkdir] Created dir: /var/lib/postgresql/benchmarksql/dist
[jar] Building jar: /var/lib/postgresql/benchmarksql/dist/BenchmarkSQL-5.0.jar

BUILD SUCCESSFUL
Total time: 4 seconds

Downloading the pre-built package

postgres@capdata:~$ cd benchmarksql/run/
postgres@capdata:~/benchmarksql/run$ wget https://github.com/Capdata/benchmarksql/releases/download/v5.1/BenchmarkSQL-5.1.jar
--2020-12-18 15:43:12-- https://github.com/Capdata/benchmarksql/releases/download/v5.1/BenchmarkSQL-5.1.jar
Resolving github.com (github.com)... 140.82.121.3
Connecting to github.com (github.com)|140.82.121.3|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://github-production-release-asset-2e65be.s3.amazonaws.com/311352109/cee1a380-4099-11eb-9e6d-b7c7d2a45a5e?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20201218%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201218T144314Z&X-Amz-Expires=300&X-Amz-Signature=76b47533e7e988c887c13e7044fa2ccfede086ccad7a9acc2c3d721f3c65b7f5&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=311352109&response-content-disposition=attachment%3B%20filename%3DBenchmarkSQL-5.1.jar&response-content-type=application%2Foctet-stream [following]
--2020-12-18 15:43:12-- https://github-production-release-asset-2e65be.s3.amazonaws.com/311352109/cee1a380-4099-11eb-9e6d-b7c7d2a45a5e?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20201218%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20201218T144314Z&X-Amz-Expires=300&X-Amz-Signature=76b47533e7e988c887c13e7044fa2ccfede086ccad7a9acc2c3d721f3c65b7f5&X-Amz-SignedHeaders=host&actor_id=0&key_id=0&repo_id=311352109&response-content-disposition=attachment%3B%20filename%3DBenchmarkSQL-5.1.jar&response-content-type=application%2Foctet-stream
Resolving github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)... 52.217.111.196
Connecting to github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)|52.217.111.196|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 67398 (66K) [application/octet-stream]
Saving to: ‘BenchmarkSQL-5.1.jar’

BenchmarkSQL-5.1.jar 100%[==================================================================================>] 65.82K 292KB/s in 0.2s

2020-12-18 15:43:13 (292 KB/s) - ‘BenchmarkSQL-5.1.jar’ saved [67398/67398]

Preparing the DBMS

We start by checking that everything is running correctly:

postgres@capdata:~$ ps -ef | grep postgres
postgres 918 1 0 12:15 ? 00:00:00 /usr/lib/postgresql/12/bin/postgres -D /var/lib/postgresql/12/main -c config_file=/etc/postgresql/12/main/postgresql.conf
postgres 996 918 0 12:15 ? 00:00:00 postgres: 12/main: checkpointer
postgres 997 918 0 12:15 ? 00:00:00 postgres: 12/main: background writer
postgres 998 918 0 12:15 ? 00:00:00 postgres: 12/main: walwriter
postgres 999 918 0 12:15 ? 00:00:00 postgres: 12/main: autovacuum launcher
postgres 1000 918 0 12:15 ? 00:00:00 postgres: 12/main: stats collector
postgres 1001 918 0 12:15 ? 00:00:00 postgres: 12/main: logical replication launcher
root 37426 2242 0 15:06 pts/0 00:00:00 su - postgres
postgres 37427 37426 0 15:06 pts/0 00:00:00 -bash
postgres 37444 37427 0 15:06 pts/0 00:00:00 ps -ef
postgres 37445 37427 0 15:06 pts/0 00:00:00 grep postgres

Next, connect to the DBMS and create a database and a user for the benchmark:

postgres@capdata:~/benchmarksql/run$ psql
psql (12.5 (Ubuntu 12.5-0ubuntu0.20.04.1))
Type "help" for help.

postgres=# create user tpcc password 'mysecuredpassword';
CREATE ROLE
postgres=# create database benchmark owner tpcc;
CREATE DATABASE
postgres=#

Configuring the benchmark

The configuration file then needs to be adjusted to match your own setup; start by copying the example:

postgres@osboxes:~$ cd benchmarksql/run/
postgres@osboxes:~/benchmarksql/run$ ls
BenchmarkSQL-5.1.jar generateReport.sh props.fb props.pg runDatabaseDestroy.sh sql.common sql.oracle
funcs.sh log4j.properties props.mysql runBenchmark.sh runLoader.sh sql.firebird sql.postgres
generateGraphs.sh misc props.ora runDatabaseBuild.sh runSQL.sh sql.mysql
postgres@osboxes:~/benchmarksql/run$ cp props.pg my_props.pg

Then edit it to set the important values, starting with the database connection details, for example:

db=postgres                                        # DBMS type
driver=org.postgresql.Driver                       # JDBC driver to use
conn=jdbc:postgresql://192.168.56.2:5432/benchmark    # JDBC connection URL
SSL=true                                           # Whether or not SSL is used
user=tpcc                                          # Username
password=mysecuredpassword                         # Password

Then configure the load profile:

warehouses=200     # Number of warehouses; determines the final database size (allow ~1 GB per warehouse)
loadWorkers=4      # Number of parallel processes loading the database (only used during the load)

terminals=100      # Number of simultaneous "clients"

runTxnsPerTerminal=0  # Either a number of transactions per client (terminal)
runMins=20            # or a benchmark duration (exactly one of the two must be > 0)
limitTxnsPerMin=0     # The number of transactions per minute can also be capped
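
Since the properties file is plain `key=value` text, a quick pre-flight check can catch an invalid load profile before a long run. A minimal sketch (the `parse_props` and `check_profile` helpers below are hypothetical, not part of BenchmarkSQL), applying the ~1 GB per warehouse rule of thumb mentioned above:

```python
def parse_props(text):
    """Parse 'key=value' lines, ignoring blanks and trailing '#' comments."""
    props = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()
        if not line:
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

def check_profile(props):
    """Exactly one of runTxnsPerTerminal / runMins must be > 0."""
    txns = int(props.get("runTxnsPerTerminal", 0))
    mins = int(props.get("runMins", 0))
    if (txns > 0) == (mins > 0):
        raise ValueError("set exactly one of runTxnsPerTerminal / runMins > 0")
    # ~1 GB per warehouse rule of thumb from above.
    return int(props["warehouses"])  # estimated final size in GB

print(check_profile(parse_props("warehouses=200\nrunMins=20\nrunTxnsPerTerminal=0")))  # → 200
```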

Loading the data

You can then create the schema and load the data:

postgres@osboxes:~/benchmarksql/run$ ./runDatabaseBuild.sh my_props.pg
# ------------------------------------------------------------
# Loading SQL file ./sql.common/tableCreates.sql
# ------------------------------------------------------------
create table bmsql_config (
cfg_name    varchar(30) primary key,
cfg_value   varchar(50)
);
create table bmsql_warehouse (
w_id        integer   not null,
w_ytd       decimal(12,2),
w_tax       decimal(4,4),
w_name      varchar(10),
w_street_1  varchar(20),
w_street_2  varchar(20),
w_city      varchar(20),
w_state     char(2),
w_zip       char(9)
);
create table bmsql_district (
d_w_id       integer       not null,
d_id         integer       not null,
d_ytd        decimal(12,2),
d_tax        decimal(4,4),
d_next_o_id  integer,
d_name       varchar(10),
d_street_1   varchar(20),
d_street_2   varchar(20),
d_city       varchar(20),
d_state      char(2),
d_zip        char(9)
);
create table bmsql_customer (
c_w_id         integer        not null,
c_d_id         integer        not null,
c_id           integer        not null,
c_discount     decimal(4,4),
c_credit       char(2),
c_last         varchar(16),
c_first        varchar(16),
c_credit_lim   decimal(12,2),
c_balance      decimal(12,2),
c_ytd_payment  decimal(12,2),
c_payment_cnt  integer,
c_delivery_cnt integer,
c_street_1     varchar(20),
c_street_2     varchar(20),
c_city         varchar(20),
c_state        char(2),
c_zip          char(9),
c_phone        char(16),
c_since        timestamp,
c_middle       char(2),
c_data         varchar(500)
);
create sequence bmsql_hist_id_seq;
create table bmsql_history (
hist_id  integer,
h_c_id   integer,
h_c_d_id integer,
h_c_w_id integer,
h_d_id   integer,
h_w_id   integer,
h_date   timestamp,
h_amount decimal(6,2),
h_data   varchar(24)
);
create table bmsql_new_order (
no_w_id  integer   not null,
no_d_id  integer   not null,
no_o_id  integer   not null
);
create table bmsql_oorder (
o_w_id       integer      not null,
o_d_id       integer      not null,
o_id         integer      not null,
o_c_id       integer,
o_carrier_id integer,
o_ol_cnt     integer,
o_all_local  integer,
o_entry_d    timestamp
);
create table bmsql_order_line (
ol_w_id         integer   not null,
ol_d_id         integer   not null,
ol_o_id         integer   not null,
ol_number       integer   not null,
ol_i_id         integer   not null,
ol_delivery_d   timestamp,
ol_amount       decimal(6,2),
ol_supply_w_id  integer,
ol_quantity     integer,
ol_dist_info    char(24)
);
create table bmsql_item (
i_id     integer      not null,
i_name   varchar(24),
i_price  decimal(5,2),
i_data   varchar(50),
i_im_id  integer
);
create table bmsql_stock (
s_w_id       integer       not null,
s_i_id       integer       not null,
s_quantity   integer,
s_ytd        integer,
s_order_cnt  integer,
s_remote_cnt integer,
s_data       varchar(50),
s_dist_01    char(24),
s_dist_02    char(24),
s_dist_03    char(24),
s_dist_04    char(24),
s_dist_05    char(24),
s_dist_06    char(24),
s_dist_07    char(24),
s_dist_08    char(24),
s_dist_09    char(24),
s_dist_10    char(24)
);
Starting BenchmarkSQL LoadData

driver=org.postgresql.Driver
conn=jdbc:postgresql://192.168.56.2:5432/benchmark
user=tpcc
password=***********
warehouses=200
loadWorkers=4
fileLocation (not defined)
csvNullValue (not defined - using default 'NULL')

Worker 000: Loading ITEM
Worker 001: Loading Warehouse      1
Worker 003: Loading Warehouse      2
Worker 002: Loading Warehouse      3
..........
..........
Worker 000: Loading Warehouse    199 done
Worker 003: Loading Warehouse    200 done
# ------------------------------------------------------------
# Loading SQL file ./sql.common/indexCreates.sql
# ------------------------------------------------------------
alter table bmsql_warehouse add constraint bmsql_warehouse_pkey
primary key (w_id);
alter table bmsql_district add constraint bmsql_district_pkey
primary key (d_w_id, d_id);
alter table bmsql_customer add constraint bmsql_customer_pkey
primary key (c_w_id, c_d_id, c_id);
create index bmsql_customer_idx1
on  bmsql_customer (c_w_id, c_d_id, c_last, c_first);
alter table bmsql_oorder add constraint bmsql_oorder_pkey
primary key (o_w_id, o_d_id, o_id);
create unique index bmsql_oorder_idx1
on  bmsql_oorder (o_w_id, o_d_id, o_carrier_id, o_id);
alter table bmsql_new_order add constraint bmsql_new_order_pkey
primary key (no_w_id, no_d_id, no_o_id);
alter table bmsql_order_line add constraint bmsql_order_line_pkey
primary key (ol_w_id, ol_d_id, ol_o_id, ol_number);
alter table bmsql_stock add constraint bmsql_stock_pkey
primary key (s_w_id, s_i_id);
alter table bmsql_item add constraint bmsql_item_pkey
primary key (i_id);
# ------------------------------------------------------------
# Loading SQL file ./sql.common/foreignKeys.sql
# ------------------------------------------------------------
alter table bmsql_district add constraint d_warehouse_fkey
foreign key (d_w_id)
references bmsql_warehouse (w_id);
alter table bmsql_customer add constraint c_district_fkey
foreign key (c_w_id, c_d_id)
references bmsql_district (d_w_id, d_id);
alter table bmsql_history add constraint h_customer_fkey
foreign key (h_c_w_id, h_c_d_id, h_c_id)
references bmsql_customer (c_w_id, c_d_id, c_id);
alter table bmsql_history add constraint h_district_fkey
foreign key (h_w_id, h_d_id)
references bmsql_district (d_w_id, d_id);
alter table bmsql_new_order add constraint no_order_fkey
foreign key (no_w_id, no_d_id, no_o_id)
references bmsql_oorder (o_w_id, o_d_id, o_id);
alter table bmsql_oorder add constraint o_customer_fkey
foreign key (o_w_id, o_d_id, o_c_id)
references bmsql_customer (c_w_id, c_d_id, c_id);
alter table bmsql_order_line add constraint ol_order_fkey
foreign key (ol_w_id, ol_d_id, ol_o_id)
references bmsql_oorder (o_w_id, o_d_id, o_id);
alter table bmsql_order_line add constraint ol_stock_fkey
foreign key (ol_supply_w_id, ol_i_id)
references bmsql_stock (s_w_id, s_i_id);
alter table bmsql_stock add constraint s_warehouse_fkey
foreign key (s_w_id)
references bmsql_warehouse (w_id);
alter table bmsql_stock add constraint s_item_fkey
foreign key (s_i_id)
references bmsql_item (i_id);
# ------------------------------------------------------------
# Loading SQL file ./sql.postgres/extraHistID.sql
# ------------------------------------------------------------
-- ----
-- Extra Schema objects/definitions for history.hist_id in PostgreSQL
-- ----
-- ----
--      This is an extra column not present in the TPC-C
--      specs. It is useful for replication systems like
--      Bucardo and Slony-I, which like to have a primary
--      key on a table. It is an auto-increment or serial
--      column type. The definition below is compatible
--      with Oracle 11g, using a sequence and a trigger.
-- ----
-- Adjust the sequence above the current max(hist_id)
select setval('bmsql_hist_id_seq', (select max(hist_id) from bmsql_history));
-- Make nextval(seq) the default value of the hist_id column.
alter table bmsql_history
alter column hist_id set default nextval('bmsql_hist_id_seq');
-- Add a primary key history(hist_id)
alter table bmsql_history add primary key (hist_id);
# ------------------------------------------------------------
# Loading SQL file ./sql.postgres/buildFinish.sql
# ------------------------------------------------------------
-- ----
-- Extra commands to run after the tables are created, loaded,
-- indexes built and extra's created.
-- PostgreSQL version.
-- ----
vacuum analyze;
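
Before moving on, it can be worth sanity-checking the load. Per the TPC-C specification's initial population, the row counts scale linearly with the number of warehouses (approximately so for bmsql_order_line, whose per-order line count is random, averaging 10). A small sketch of the expected counts:

```python
def expected_rows(warehouses):
    """Expected initial row counts per table for W warehouses (TPC-C spec)."""
    w = warehouses
    return {
        "bmsql_warehouse": w,
        "bmsql_district": 10 * w,          # 10 districts per warehouse
        "bmsql_customer": 30_000 * w,      # 3,000 customers per district
        "bmsql_history": 30_000 * w,       # one history row per customer
        "bmsql_oorder": 30_000 * w,        # 3,000 orders per district
        "bmsql_new_order": 9_000 * w,      # last 900 orders per district
        "bmsql_order_line": 300_000 * w,   # ~10 lines per order on average
        "bmsql_stock": 100_000 * w,
        "bmsql_item": 100_000,             # fixed, independent of W
    }

for table, rows in expected_rows(200).items():
    print(f"{table:<18} {rows:>13,}")
```

You can compare these figures with a quick `select count(*)` on each table in the benchmark database.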

Running the benchmark

And there we go, everything is ready to launch the first benchmark!
But first, to be able to capture the DBMS metrics, the jaydebeapi Python library must be installed:

postgres@osboxes:~/benchmarksql/run$ pip install jaydebeapi
Defaulting to user installation because normal site-packages is not writeable
Collecting jaydebeapi
  Downloading JayDeBeApi-1.2.3-py3-none-any.whl (26 kB)
Collecting JPype1
  Downloading JPype1-1.2.0-cp38-cp38-manylinux2010_x86_64.whl (453 kB)
     |████████████████████████████████| 453 kB 1.4 MB/s
Installing collected packages: JPype1, jaydebeapi
Successfully installed JPype1-1.2.0 jaydebeapi-1.2.3
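
For reference, jaydebeapi lets the collector poll the DBMS over the same JDBC driver the benchmark uses. Here is a minimal sketch of that idea, querying pg_stat_database; the exact queries of the bundled ./misc/db_collector_pg.py may differ, and the jar path in the example call is an assumption (point it at your local PostgreSQL JDBC driver jar):

```python
METRICS_SQL = """
select xact_commit, xact_rollback, blks_read, blks_hit
from pg_stat_database
where datname = current_database()
"""

def collect(url, user, password, jar):
    """Take one DBMS metrics snapshot over JDBC."""
    import jaydebeapi  # imported lazily so the sketch loads without it
    conn = jaydebeapi.connect("org.postgresql.Driver", url, [user, password], jar)
    try:
        cur = conn.cursor()
        cur.execute(METRICS_SQL)
        return cur.fetchone()
    finally:
        conn.close()

# Example call (connection values from the settings above; the jar path is
# hypothetical):
# collect("jdbc:postgresql://192.168.56.2:5432/benchmark",
#         "tpcc", "mysecuredpassword", "postgresql.jar")
```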

For my part, I chose a 20-minute benchmark without capping the maximum transactions:

postgres@osboxes:~/benchmarksql/run$ ./runBenchmark.sh my_props.pg
11:04:36,433 [main] INFO   jTPCC : Term-00,
11:04:36,437 [main] INFO   jTPCC : Term-00, +-------------------------------------------------------------+
11:04:36,437 [main] INFO   jTPCC : Term-00,      BenchmarkSQL v5.0
11:04:36,438 [main] INFO   jTPCC : Term-00, +-------------------------------------------------------------+
11:04:36,438 [main] INFO   jTPCC : Term-00,  (c) 2003, Raul Barbosa
11:04:36,439 [main] INFO   jTPCC : Term-00,  (c) 2004-2016, Denis Lussier
11:04:36,442 [main] INFO   jTPCC : Term-00,  (c) 2016, Jan Wieck
11:04:36,443 [main] INFO   jTPCC : Term-00,  (c) 2020, Nicolas Martin
11:04:36,443 [main] INFO   jTPCC : Term-00, +-------------------------------------------------------------+
11:04:36,444 [main] INFO   jTPCC : Term-00,
11:04:36,470 [main] INFO   jTPCC : Term-00, db=postgres
11:04:36,471 [main] INFO   jTPCC : Term-00, driver=org.postgresql.Driver
11:04:36,471 [main] INFO   jTPCC : Term-00, conn=jdbc:postgresql://192.168.56.2:5432/benchmark
11:04:36,472 [main] INFO   jTPCC : Term-00, user=tpcc
11:04:36,481 [main] INFO   jTPCC : Term-00, true
11:04:36,481 [main] INFO   jTPCC : Term-00,
11:04:36,484 [main] INFO   jTPCC : Term-00, warehouses=200
11:04:36,484 [main] INFO   jTPCC : Term-00, terminals=100
11:04:36,485 [main] INFO   jTPCC : Term-00, runMins=20
11:04:36,485 [main] INFO   jTPCC : Term-00, limitTxnsPerMin=0
11:04:36,486 [main] INFO   jTPCC : Term-00, terminalWarehouseFixed=true
11:04:36,486 [main] INFO   jTPCC : Term-00,
11:04:36,486 [main] INFO   jTPCC : Term-00, newOrderWeight=45
11:04:36,487 [main] INFO   jTPCC : Term-00, paymentWeight=43
11:04:36,487 [main] INFO   jTPCC : Term-00, orderStatusWeight=4
11:04:36,487 [main] INFO   jTPCC : Term-00, deliveryWeight=4
11:04:36,487 [main] INFO   jTPCC : Term-00, stockLevelWeight=4
11:04:36,488 [main] INFO   jTPCC : Term-00,
11:04:36,488 [main] INFO   jTPCC : Term-00, resultDirectory=my_result_%tY-%tm-%td_%tH%tM%tS
11:04:36,488 [main] INFO   jTPCC : Term-00, osCollectorScript=null
11:04:36,489 [main] INFO   jTPCC : Term-00, dbCollectorScript=./misc/db_collector_pg.py
11:04:36,489 [main] INFO   jTPCC : Term-00,
11:04:36,528 [main] INFO   jTPCC : Term-00, copied my_props.pg to my_result_2020-12-31_110436/run.properties
Term-00, Running Average tpmTOTAL: 745.88    Curren
11:24:48,766 [Thread-73] INFO   jTPCC : Term-00,
11:24:48,804 [Thread-73] INFO   jTPCC : Term-00,
11:24:48,820 [Thread-73] INFO   jTPCC : Term-00, Measured tpmC (NewOrders) = 336.28
11:24:48,829 [Thread-73] INFO   jTPCC : Term-00, Measured tpmTOTAL = 746.04
11:24:48,832 [Thread-73] INFO   jTPCC : Term-00, Session Start     = 2020-12-31 11:04:42
11:24:48,834 [Thread-73] INFO   jTPCC : Term-00, Session End       = 2020-12-31 11:24:48
11:24:48,848 [Thread-73] INFO   jTPCC : Term-00, Transaction Count = 14994
postgres@osboxes:~/benchmarksql/run$ Exception ignored in: <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>
BrokenPipeError: [Errno 32] Broken pipe

Once the benchmark is complete, the results can be found in the directory set in the configuration file, which is also echoed when the benchmark starts:

11:04:36,488 [main] INFO   jTPCC : Term-00, resultDirectory=my_result_%tY-%tm-%td_%tH%tM%tS

The directory contains the following:

postgres@osboxes:~/benchmarksql/run$ ls -ltr my_result_2020-12-31_110436
total 8
-rw-rw-r-- 1 postgres postgres 1110 Dec 31 11:04 run.properties
drwxrwxr-x 2 postgres postgres 4096 Dec 31 11:04 data

The run.properties file is a copy of the configuration file as it was when the benchmark was launched, and the data directory holds the benchmark results:

  • the benchmark metrics in result.csv
  • the DBMS metrics in db_info.csv
  • the run details in runInfo.csv

postgres@osboxes:~/benchmarksql/run/my_result_2020-12-31_110436$ ls -ltr data
total 976
-rw-rw-r-- 1 postgres postgres    220 Dec 31 11:04 runInfo.csv
-rw-rw-r-- 1 postgres postgres 539235 Dec 31 11:24 result.csv
-rw-rw-r-- 1 postgres postgres 444120 Dec 31 11:24 db_info.csv
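
If you want to double-check the report's headline numbers, tpmTOTAL and tpmC can be recomputed from result.csv. A sketch, under the assumption that the file exposes an `elapsed` column in milliseconds and a `ttype` transaction-type column whose NewOrder label is `NEW_ORDER` (check the header of your own file first, as the layout can vary between BenchmarkSQL versions):

```python
import csv
import io

def summarize(csv_text):
    """Recompute transactions-per-minute figures from raw per-transaction rows."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    elapsed_min = max(float(r["elapsed"]) for r in rows) / 60_000
    new_orders = sum(1 for r in rows if r["ttype"] == "NEW_ORDER")
    return {"tpmTOTAL": len(rows) / elapsed_min,   # all transaction types
            "tpmC": new_orders / elapsed_min}      # NewOrder transactions only

sample = "elapsed,ttype\n60000,NEW_ORDER\n90000,PAYMENT\n120000,NEW_ORDER\n"
print(summarize(sample))  # → {'tpmTOTAL': 1.5, 'tpmC': 1.0}
```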

Analyzing the results

This requires R with the following packages installed:

  • jsonlite (only needed when using the cloud-provider scripts)
  • tidyverse
  • lubridate
  • ggplot2
  • hrbrthemes
  • viridis
  • htmlwidgets

To generate the report, run:

postgres@osboxes:~/benchmarksql/run$ ./generateReport.sh my_result_2020-12-31_110436
Generating my_result_2020-12-31_110436/p_db.png ... OK
Generating my_result_2020-12-31_110436/tpm_nopm.png ... OK
Generating my_result_2020-12-31_110436/latency.png ... OK
Generating my_result_2020-12-31_110436/cpu_utilization.png ... Error in file(file, "rt") : cannot open the connection
Calls: read.csv -> read.table -> file
In addition: Warning message:
In file(file, "rt") :
  cannot open file 'data/sys_info.csv': No such file or directory
Execution halted
ERROR

R version 3.6.3 (2020-02-29) -- "Holding the Windsock"
Copyright (C) 2020 The R Foundation for Statistical Computing
Platform: x86_64-pc-linux-gnu (64-bit)

R is free software and comes with ABSOLUTELY NO WARRANTY.
You are welcome to redistribute it under certain conditions.
Type 'license()' or 'licence()' for distribution details.

  Natural language support but running in an English locale

R is a collaborative project with many contributors.
Type 'contributors()' for more information and
'citation()' on how to cite R or R packages in publications.

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

> # ----
> # R graph to show CPU utilization
> # ----
>
> # ----
> # Read the runInfo.csv file.
> # ----
> runInfo <- read.csv("data/runInfo.csv", head=TRUE)
>
> # ----
> # Determine the grouping interval in seconds based on the
> # run duration.
> # ----
> xmax <- runInfo$runMins
> for (interval in c(1, 2, 5, 10, 20, 60, 120, 300, 600)) {
+     if ((xmax * 60) / interval <= 1000) {
+         break
+     }
+ }
> idiv <- interval * 1000.0
>
> # ----
> # Read the recorded CPU data and aggregate it for the desired interval.
> # ----
> rawData <- read.csv("data/sys_info.csv", head=TRUE)
Generating my_result_2020-12-31_110436/report.html ... OK

The last few errors can be ignored, since the OS metrics were not captured here.
The output is generated in the root of the result directory:

postgres@osboxes:~/benchmarksql/run$ ls -ltr my_result_2020-12-31_110436
total 328
-rw-rw-r-- 1 postgres postgres   1110 Dec 31 11:04 run.properties
drwxrwxr-x 2 postgres postgres   4096 Dec 31 16:21 data
-rw-rw-r-- 1 postgres postgres  18711 Dec 31 17:25 p_db.png
-rw-rw-r-- 1 postgres postgres 128315 Dec 31 17:25 tpm_nopm.png
-rw-rw-r-- 1 postgres postgres 165201 Dec 31 17:25 latency.png
-rw-rw-r-- 1 postgres postgres   7125 Dec 31 17:25 report.html



Capdata team
