Greenplum pgbench Stress Testing
Published: 2019-05-20


1. Installing pgbench
2. TPC-B testing
3. Insert, delete, update
4. Do concurrent UPDATE/DELETE on the same heap table block each other?
5. How global deadlock detection works

1. Installing pgbench (as root)

1. Enter the `contrib` directory of the Greenplum source tree.
2. Run `make all; make install`.
3. Run `yum install gnuplot`.

```
pgbench]# make all
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -O2 -pthread -D_REENTRANT -D_THREAD_SAFE -D_POSIX_PTHREAD_SEMANTICS -I../../src/interfaces/libpq -I. -I. -I../../src/include -D_GNU_SOURCE   -c -o pgbench.o pgbench.c
gcc -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -O2 -pthread -D_REENTRANT -D_THREAD_SAFE -D_POSIX_PTHREAD_SEMANTICS pgbench.o -L../../src/common -lpgcommon -L../../src/port -lpgport -L../../src/interfaces/libpq -lpq -L../../src/port -L../../src/common   -Wl,--as-needed -Wl,-rpath,'/usr/local/pgsql/lib',--enable-new-dtags   -lpgcommon -lpgport -lz -lrt -lcrypt -ldl -lm  -o pgbench

pgbench]# ./pgbench --help
pgbench is a benchmarking tool for PostgreSQL.

Usage:
  pgbench [OPTION]... [DBNAME]

Initialization options:
  -i, --initialize         invokes initialization mode
  -F, --fillfactor=NUM     set fill factor
  -n, --no-vacuum          do not run VACUUM after initialization
  -q, --quiet              quiet logging (one message each 5 seconds)
  -s, --scale=NUM          scaling factor
  --foreign-keys           create foreign key constraints between tables
  --index-tablespace=TABLESPACE
                           create indexes in the specified tablespace
  --tablespace=TABLESPACE  create tables in the specified tablespace
  --unlogged-tables        create tables as unlogged tables

Benchmarking options:
  -c, --client=NUM         number of concurrent database clients (default: 1)
  -C, --connect            establish new connection for each transaction
  -D, --define=VARNAME=VALUE
                           define variable for use by custom script
  -f, --file=FILENAME      read transaction script from FILENAME
  -j, --jobs=NUM           number of threads (default: 1)
  -l, --log                write transaction times to log file
  -M, --protocol=simple|extended|prepared
                           protocol for submitting queries (default: simple)
  -n, --no-vacuum          do not run VACUUM before tests
  -N, --skip-some-updates  skip updates of pgbench_tellers and pgbench_branches
  -P, --progress=NUM       show thread progress report every NUM seconds
  -r, --report-latencies   report average latency per command
  -R, --rate=NUM           target rate in transactions per second
  -s, --scale=NUM          report this scale factor in output
  -S, --select-only        perform SELECT-only transactions
  -t, --transactions=NUM   number of transactions each client runs (default: 10)
  -T, --time=NUM           duration of benchmark test in seconds
  -v, --vacuum-all         vacuum all four standard tables before tests
  --aggregate-interval=NUM aggregate data over NUM seconds
  --sampling-rate=NUM      fraction of transactions to log (e.g. 0.01 for 1%)

Common options:
  -d, --debug              print debugging output
  -h, --host=HOSTNAME      database server host or socket directory
  -p, --port=PORT          database server port number
  -U, --username=USERNAME  connect as specified database user
  -V, --version            output version information, then exit
  -?, --help               show this help, then exit

Report bugs to .
```
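As a quick sanity check on the option semantics above, the totals pgbench reports follow directly from `-c` and `-t`. A minimal sketch (the helper names below are mine, not pgbench's):

```python
# Illustrative helpers (not part of pgbench): how the totals pgbench
# prints relate to its -c and -t options.
def expected_transactions(clients: int, per_client: int) -> int:
    # "number of transactions actually processed" should equal this
    # unless the run failed before completion.
    return clients * per_client

def tps(transactions: int, elapsed_seconds: float) -> float:
    # pgbench's tps figure is simply transactions / elapsed time.
    return transactions / elapsed_seconds

print(expected_transactions(10, 100))  # 1000, matching the -c 10 -t 100 run later
```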
2. Uploading pgbench-tools-master.zip (as gpadmin)

1. `su gpadmin`
2. `cd /home/gpadmin/install`
3. Upload the archive to /home/gpadmin/install.
4. Unzip it: `unzip pgbench-tools-master.zip`
5. `cd /home/gpadmin/install/pgbench-tools-master`

```
[gpadmin@gptest01 pgbench-tools-master]$ psql -d postgres -c 'create database pgbench'
CREATE DATABASE
[gpadmin@gptest01 pgbench-tools-master]$ psql -d postgres -c 'create database results'
CREATE DATABASE
[gpadmin@gptest01 pgbench-tools-master]$ psql -f init/resultdb.sql -d results
BEGIN
psql:init/resultdb.sql:3: NOTICE: table "testset" does not exist, skipping
DROP TABLE
CREATE TABLE
psql:init/resultdb.sql:9: NOTICE: table "tests" does not exist, skipping
DROP TABLE
psql:init/resultdb.sql:28: WARNING: referential integrity (FOREIGN KEY) constraints are not supported in Greenplum Database, will not be enforced
CREATE TABLE
psql:init/resultdb.sql:30: NOTICE: table "timing" does not exist, skipping
DROP TABLE
psql:init/resultdb.sql:37: NOTICE: Table doesn't have 'DISTRIBUTED BY' clause -- Using column named 'ts' as the Greenplum Database data distribution key for this table.
HINT: The 'DISTRIBUTED BY' clause determines the distribution of data. Make sure column(s) chosen are the optimal data distribution key to minimize skew.
psql:init/resultdb.sql:37: WARNING: referential integrity (FOREIGN KEY) constraints are not supported in Greenplum Database, will not be enforced
CREATE TABLE
CREATE INDEX
psql:init/resultdb.sql:41: NOTICE: table "test_bgwriter" does not exist, skipping
DROP TABLE
psql:init/resultdb.sql:53: WARNING: referential integrity (FOREIGN KEY) constraints are not supported in Greenplum Database, will not be enforced
CREATE TABLE
CREATE FUNCTION
CREATE FUNCTION
COMMIT
[gpadmin@gptest01 pgbench-tools-master]$ ./newset 'Initial Config'
which: no pgbench in (/usr/local/greenplum-cc-web-6.0.0/bin:/usr/local/greenplum-cc-web/./bin:/usr/local/greenplum-db/./bin:/usr/local/greenplum-db/./ext/python/bin:/usr/local/greenplum-db/./bin:/usr/local/greenplum-db/./ext/python/bin:/usr/local/greenplum-cc-web-2.0.0-build-32/bin:/usr/local/greenplum-cc-web/./bin:/usr/lib64/qt-3.3/bin:/usr/local/go/bin:/usr/local/codis//bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/usr/local/go/bin:/usr/local/go/bin:/usr/local/codis/src/github.com/CodisLabs/codis/bin/:/data/codis/redis/bin:/home/gpadmin/bin)
which: no gnuplot in (/usr/local/greenplum-cc-web-6.0.0/bin:/usr/local/greenplum-cc-web/./bin:/usr/local/greenplum-db/./bin:/usr/local/greenplum-db/./ext/python/bin:/usr/local/greenplum-db/./bin:/usr/local/greenplum-db/./ext/python/bin:/usr/local/greenplum-cc-web-2.0.0-build-32/bin:/usr/local/greenplum-cc-web/./bin:/usr/lib64/qt-3.3/bin:/usr/local/go/bin:/usr/local/codis//bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/usr/local/go/bin:/usr/local/go/bin:/usr/local/codis/src/github.com/CodisLabs/codis/bin/:/data/codis/redis/bin:/home/gpadmin/bin)
INSERT 0 1
 set | info
-----+----------------
   1 | Initial Config
(1 row)
```

Initialize the standard pgbench tables:

```
[gpadmin@gptest01 pgbench]$ ./pgbench -i pgbench
NOTICE: table "pgbench_history" does not exist, skipping
NOTICE: Table doesn't have 'DISTRIBUTED BY' clause -- Using column named 'tid' as the Greenplum Database data distribution key for this table.
HINT: The 'DISTRIBUTED BY' clause determines the distribution of data. Make sure column(s) chosen are the optimal data distribution key to minimize skew.
NOTICE: table "pgbench_tellers" does not exist, skipping
NOTICE: Table doesn't have 'DISTRIBUTED BY' clause -- Using column named 'tid' as the Greenplum Database data distribution key for this table.
HINT: The 'DISTRIBUTED BY' clause determines the distribution of data. Make sure column(s) chosen are the optimal data distribution key to minimize skew.
NOTICE: table "pgbench_accounts" does not exist, skipping
NOTICE: Table doesn't have 'DISTRIBUTED BY' clause -- Using column named 'aid' as the Greenplum Database data distribution key for this table.
HINT: The 'DISTRIBUTED BY' clause determines the distribution of data. Make sure column(s) chosen are the optimal data distribution key to minimize skew.
NOTICE: table "pgbench_branches" does not exist, skipping
NOTICE: Table doesn't have 'DISTRIBUTED BY' clause -- Using column named 'bid' as the Greenplum Database data distribution key for this table.
HINT: The 'DISTRIBUTED BY' clause determines the distribution of data. Make sure column(s) chosen are the optimal data distribution key to minimize skew.
creating tables...
100000 of 100000 tuples (100%) done (elapsed 0.09 s, remaining 0.00 s).
vacuum...
set primary keys...
done.
```

Scale the data up to 10 million rows:

```
[gpadmin@gptest01 pgbench]$ ./pgbench -i pgbench -s 100
```

Run the default TPC-B test:

```
[gpadmin@gptest01 pgbench]$ ./pgbench -c 10 -t 100 pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
query mode: simple
number of clients: 10
number of threads: 1
number of transactions per client: 100
number of transactions actually processed: 1000/1000
latency average: 462.268 ms
tps = 21.632470 (including connections establishing)
tps = 21.680133 (excluding connections establishing)
```

The first four lines simply report the most important parameter settings. The next line gives the number of transactions completed and the number expected (the latter is just clients × transactions per client); the two are equal unless the run failed before finishing. The final two lines report the TPS rate, once including and once excluding the time spent establishing database sessions.

The transaction script:

```
[gpadmin@gptest01 pgbench]$ cat test.sql
\set nbranches 1 * :scale
\set ntellers 10 * :scale
\set naccounts 100000 * :scale
\setrandom aid 1 :naccounts
\setrandom bid 1 :nbranches
\setrandom tid 1 :ntellers
\setrandom delta -5000 5000
BEGIN;
UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
END;
```

Script notes: each transaction contains UPDATE, SELECT and INSERT statements, and each serves a different testing purpose:

(1) UPDATE pgbench_accounts: the largest table, used to drive disk I/O.
(2) SELECT abalance: the preceding UPDATE leaves the row in cache, which then answers this query.
(3) UPDATE pgbench_tellers: there are far fewer tellers than accounts, so this table is small and very likely resident in memory.
(4) UPDATE pgbench_branches: an even smaller table whose contents are cached; with a small database and many clients, lock contention on it can become the performance bottleneck.
(5) INSERT INTO pgbench_history: an append-only table that is never updated or queried afterwards and has no indexes; compared with the UPDATEs, its inserts cost very little disk writing.

```
postgres$ pgbench -c 15 -t 300 pgbench -r -f test.sql
starting vacuum...end.
transaction type: Custom query
scaling factor: 1
query mode: simple
number of clients: 15                                    # -c controls concurrency
number of threads: 1
number of transactions per client: 300                   # transactions per client
number of transactions actually processed: 4500/4500     # total processed
tps = 453.309203 (including connections establishing)    # TPS including connection overhead
tps = 457.358998 (excluding connections establishing)    # TPS excluding connection overhead
statement latencies in milliseconds:                     # from -r: per-statement latency in ms
0.005198 \set nbranches 1 * :scale
0.001144 \set ntellers 10 * :scale
0.001088 \set naccounts 100000 * :scale
0.001400 \setrandom aid 1 :naccounts
0.000814 \setrandom bid 1 :nbranches
0.000929 \setrandom tid 1 :ntellers
0.000981 \setrandom delta -5000 5000
0.613757 BEGIN;
1.027969 UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
0.754162 SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
14.167980 UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
13.587156 UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
0.582075 INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
1.628262 END;
```

The default benchmark produces one headline metric, TPS: with identical test parameters, higher TPS generally means a better-performing server. In the test above the data set was small enough to be fully cached in memory, so disk I/O had little influence on the result.

An insert-throughput test:

```
pgbench=# create table pg_test (a1 serial, a2 int, a3 varchar(20), a4 timestamp);   -- test table

postgres$ cat pg_test.sql
insert into pg_test(a2,a3,a4) select (random()*(2*10^5)), substr('abcdefghijklmnopqrstuvwxyz',1,(random()*26)::integer), now();   -- one row per transaction

postgres$ pgbench -c 90 -T 10 pgbench -r -f pg_test.sql    # 90 concurrent clients for 10 seconds
```

Excerpt of the results: in 10 seconds, 90 concurrent clients inserted 20,196 rows, about 42 ms per insert and roughly 2,000 rows per second.

```
number of transactions actually processed: 20196
tps = 1997.514876 (including connections establishing)
tps = 2119.279239 (excluding connections establishing)
statement latencies in milliseconds:
42.217948

pgbench=# select count(*) from pg_test;
 count
-------
 20196
```

Using pgbench to assist parameter tuning: a simple work_mem example.

```
postgres=# show work_mem;    -- current work_mem
 work_mem
----------
 1MB
```

Query sample (customers is a table from the dellstore2 sample database):

```
postgres$ cat select.sql
SELECT customerid FROM customers ORDER BY zip;

postgres$ pgbench -c 90 -T 5 pgbench -r -f select.sql    # many clients sorting one table concurrently
```

A single transaction may run for a long time, but the average transaction time and a single user's execution time do not differ that dramatically. Excerpt of the results:

```
number of clients: 90
number of threads: 1
duration: 5 s
number of transactions actually processed: 150
tps = 26.593887 (including connections establishing)
tps = 27.972988 (excluding connections establishing)
statement latencies in milliseconds:
3115.754673 SELECT customerid FROM customers ORDER BY zip;
```

Same environment, with work_mem raised to 2MB:

```
number of clients: 90
number of threads: 1
duration: 5 s
number of transactions actually processed: 243
tps = 44.553026 (including connections establishing)
tps = 47.027276 (excluding connections establishing)
statement latencies in milliseconds:
1865.636761 SELECT customerid FROM customers ORDER BY zip;
```

The total number of transactions completed in 5 s rose markedly, to 243 single-table sorts. The reason: sorting is governed by work_mem, and a sort that fits entirely in memory is naturally much faster. Look at the plan:

```
postgres=# explain analyze SELECT customerid FROM customers ORDER BY zip;
                                         QUERY PLAN
--------------------------------------------------------------------------------------------
 Sort (cost=2116.77..2166.77 rows=20000 width=8) (actual time=42.536..46.117 rows=20000 loops=1)
   Sort Key: zip
   Sort Method: external sort Disk: 352kB
   -> Seq Scan on customers (cost=0.00..688.00 rows=20000 width=8) (actual time=0.013..8.942 rows=20000 loops=1)
 Total runtime: 48.858 ms
```

The plan shows that with work_mem at 1MB the sort needed about 1.352 MB in total (it spilled 352 kB to disk), so raising work_mem makes the sort noticeably faster. This is only a simple example; sizing work_mem involves other considerations, for instance under high concurrency every user is granted the same amount of sort space, which can consume a great deal of memory. Parameter tuning should always aim for balance.

Finally, a simple point-query (primary-key lookup) test. Set up:

```
postgres=# create table test(
postgres(#   id int8 primary key,
postgres(#   info text default 'tessssssssssssssssssssssssssssssssssssst',
postgres(#   state int default 0,
postgres(#   crt_time timestamp default now(),
postgres(#   mod_time timestamp default now()
postgres(# );
CREATE TABLE
postgres=# insert into test select generate_series(1,10000000);
INSERT 0 10000000
```

The script (vi test.sql):

```
\set id random(1, 10000000)
select * from test where id = :id;
```

Run the test:

```
pg12@isdtest-> pgbench -M prepared -n -r -P 1 -f ./test.sql -c 32 -j 32 -T 60
......
transaction type: ./test.sql
scaling factor: 1
query mode: prepared
number of clients: 32
number of threads: 32
duration: 60 s
number of transactions actually processed: 4931686
latency average = 0.388 ms
latency stddev = 1.248 ms
tps = 82187.250560 (including connections establishing)
tps = 82200.894111 (excluding connections establishing)
statement latencies in milliseconds:
 0.001 \set id random(1,10000000)
 0.385 select * from test where id=:id;
```

Benchmark write-up referenced: https://yq.aliyun.com/articles/25812
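The \set lines at the top of test.sql encode pgbench's standard sizing rules, which is also what `-s` did during initialization: one branch, ten tellers and 100,000 accounts per unit of scale. A quick sketch (the helper name is mine, not pgbench's):

```python
# Row counts of the standard pgbench tables as a function of -s,
# mirroring the \set nbranches/ntellers/naccounts lines in test.sql.
def table_sizes(scale: int) -> dict:
    return {
        "pgbench_branches": 1 * scale,
        "pgbench_tellers": 10 * scale,
        "pgbench_accounts": 100_000 * scale,
    }

print(table_sizes(100)["pgbench_accounts"])  # 10000000 -- the 10-million-row data set
```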

3. Insert, delete, update

A concurrent UPDATE raises the following error:

concurrent updates distribution keys on the same row is not allowed

Vendor explanation:

"

Judging from this log, the problem is roughly as follows:

Updating the distribution key is implemented in GPDB with a technique called split update (similar to updating the partition key in recent PostgreSQL versions).

That is, a DELETE is executed on the original segment and an INSERT on the new segment. The limitation of this trick is that the new tuple cannot be traced through the update chain. (Very few MPP databases support updating the distribution key at all.)

Since the new tuple cannot be traced, the INSERT on the new segment cannot be blocked, so GPDB chooses to raise an error here and roll back the whole transaction, preserving consistency.

"

With the ORCA optimizer enabled, the error reproduced even though the UPDATE did not touch the distribution key. Why?

It turns out ORCA implements every UPDATE as a split update, whether or not the distribution key changes; the error is inherent to split update.

Moreover, enabling ORCA brought no TPC improvement; throughput dropped considerably instead, because single-statement operations pay a higher optimization cost under ORCA.

 

4. Do concurrent UPDATE/DELETE on the same heap table block each other?

 

psql session 1: open a transaction and run a DELETE.

```
qmstst=# begin;
BEGIN
qmstst=# delete from locktest where id=3;
DELETE 1
```

psql session 2: open a transaction and run an UPDATE.

 

```
qmstst=# begin;
BEGIN
qmstst=# update locktest set cname=99999 where id=70;
UPDATE 1
```

As you can see, concurrent DELETE/UPDATE on the same table is supported, thanks to the global deadlock detector enabled by gp_enable_global_deadlock_detector.

All locks on the table are RowExclusiveLock, which do not block one another. Greenplum 4 could not do this: on the same heap table, sessions blocked each other whether or not they touched the same row.

```
qmstst=# select * from pg_locks where relation = 78251;
 locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction |  pid  |       mode       | granted | fastpath | mppsessionid | mppiswriter | gp_segment_id
----------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+-------+------------------+---------+----------+--------------+-------------+---------------
 relation |    16384 |    78251 |      |       |            |               |         |       |          | 9/221              | 20868 | RowExclusiveLock | t       | t        |         6816 | t           |            -1
 relation |    16384 |    78251 |      |       |            |               |         |       |          | 4/6449             | 18573 | RowExclusiveLock | t       | t        |         6819 | t           |            -1
 relation |    16384 |    78251 |      |       |            |               |         |       |          | 13/201             | 10047 | RowExclusiveLock | t       | t        |         6819 | t           |             0
 relation |    16384 |    78251 |      |       |            |               |         |       |          | 12/543             | 10875 | RowExclusiveLock | t       | t        |         6816 | t           |             2
(4 rows)
```
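The reason several RowExclusiveLock holders can coexist is PostgreSQL's table-level lock conflict matrix. A sketch of the part relevant to the pg_locks output above (conflict sets abridged to the modes that matter here):

```python
# Excerpt of PostgreSQL's table-level lock conflict matrix, limited to
# the two modes discussed here.
CONFLICTS = {
    "RowExclusiveLock": {"ShareLock", "ShareRowExclusiveLock",
                         "ExclusiveLock", "AccessExclusiveLock"},
    "ExclusiveLock": {"RowShareLock", "RowExclusiveLock",
                      "ShareUpdateExclusiveLock", "ShareLock",
                      "ShareRowExclusiveLock", "ExclusiveLock",
                      "AccessExclusiveLock"},
}

def blocks(held: str, requested: str) -> bool:
    # A requested lock waits if it conflicts with an already-held lock.
    return requested in CONFLICTS.get(held, set())

# Two DML sessions: RowExclusiveLock vs RowExclusiveLock -> no blocking.
print(blocks("RowExclusiveLock", "RowExclusiveLock"))  # False
# A whole-table ExclusiveLock, by contrast, blocks concurrent DML.
print(blocks("ExclusiveLock", "RowExclusiveLock"))     # True
```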

But what about the same row of the same table?

psql session 1: open a transaction and delete a row from the heap table.

 
```
qmstst=# begin;
BEGIN
Time: 0.242 ms
qmstst=# delete from locktest where id=65;
DELETE 1
Time: 11.232 ms
```

psql session 2: open a transaction and update the same row of that heap table. You can see it blocks until the DELETE completes; in effect an exclusive lock is being held at this point.

```
qmstst=# begin;
BEGIN
qmstst=# \timing
Timing is on.
qmstst=# update locktest set cname=99999654645 where id=65;
UPDATE 0
Time: 22968.986 ms
```

5. How global deadlock detection works

Before Greenplum 6, DML in Greenplum executed while holding a table-level write lock. With a table-level lock there is no real parallelism, and OLTP is out of the question. Starting with Greenplum 6, the gp_enable_global_deadlock_detector parameter enables deadlock detection, and DML then takes row locks instead of table locks.

First, reproduce a single-node deadlock in the test environment.

When simulating one, keep the four necessary conditions for deadlock in mind; at first I could not reproduce it because I had not yet grasped the principle. In GP4, DML took table locks, so this kind of deadlock simply could not occur.

psql1

```
qmstst=# begin;
BEGIN
Time: 0.210 ms
qmstst=# update locktest set cname='99999' where id=3;
UPDATE 1
Time: 1.702 ms
```

psql2

```
qmstst=# begin;
BEGIN
Time: 0.203 ms
qmstst=# update locktest set cname = '888888' where id=4;
UPDATE 1
Time: 2.285 ms
qmstst=# update locktest set cname = '888888' where id=3;
UPDATE 1
Time: 6986.422 ms
```

Then go back to psql1:

```
qmstst=# update locktest set cname='99999' where id=4;
ERROR:  deadlock detected  (seg0 10.50.10.170:6000 pid=24516)
DETAIL:  Process 24516 waits for ShareLock on transaction 3127071; blocked by process 25819.
Process 25819 waits for ShareLock on transaction 3127066; blocked by process 24516.
HINT:  See server log for query details.
CONTEXT:  while updating tuple (0,2) in relation "locktest"
Time: 1002.614 ms
```

At this point a deadlock has occurred: the two UPDATE transactions each wait for a lock held by the other, and the last UPDATE reports the deadlock.

Notice that the UPDATE in psql1 was detected as a deadlock almost immediately; that is governed by the parameter below, which is worth comparing later against the cluster-wide case.

```
qmstst=# show deadlock_timeout;
 deadlock_timeout
------------------
 1s
(1 row)
```

 

Postgres handles deadlocks with a deadlock detector, which is responsible for detecting and breaking them. The detector models the wait relationships between backend processes as a wait-for graph. Graph nodes are identified by process id (pid); an edge from node A to node B means that A is waiting for a lock held by B.

 

 

The basic idea of the PostgreSQL deadlock detector is as follows:

 

  • If a lock acquisition fails, the process goes to sleep.

  • A SIGALRM signal is used to wake the process after a timeout.

  • The SIGALRM handler examines PROCLOCK shared memory to build the wait-for graph, then checks for a cycle starting from the current process. A cycle means a deadlock has occurred, and the current process exits voluntarily to break it. The Postgres deadlock detector handles local deadlocks.
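The cycle check above can be sketched as a graph search over the wait-for graph. This is a simplified model; the real detector walks PROCLOCK entries and also distinguishes hard and soft edges:

```python
# Simplified wait-for graph: waits_for[pid] = set of pids holding locks
# that pid is waiting on. If we can get back to `start`, it sits on a
# cycle, i.e. a deadlock.
def in_cycle(waits_for: dict, start: int) -> bool:
    stack = list(waits_for.get(start, ()))
    visited = set()
    while stack:
        pid = stack.pop()
        if pid == start:
            return True
        if pid in visited:
            continue
        visited.add(pid)
        stack.extend(waits_for.get(pid, ()))
    return False

# The situation from the error log above: 24516 and 25819 wait on each other.
waits = {24516: {25819}, 25819: {24516}}
print(in_cycle(waits, 24516))  # True
```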

 

Deadlock in a distributed cluster

So what does a deadlock look like in a distributed cluster, and how does the cluster case differ from the single-node one?

 

Let's start with an example. Consider a cluster with one master and two segment nodes, and two concurrent distributed transactions. First, distributed transaction 1 runs on node A, then transaction 2 runs on node B. Next, transaction 1 needs to run on node B, where it is blocked by transaction 2, so distributed transaction 1 is suspended. Meanwhile, transaction 2 tries to run on node A, where it is blocked by local transaction 1, so distributed transaction 2 is suspended as well. This is a deadlock.

Note that there is no deadlock on node A or on node B taken alone, yet a deadlock has indeed occurred. Seen from the master, this is what we call a global deadlock.
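The example above can be modeled minimally: each per-segment wait-for graph is acyclic on its own, but their union, as seen from the master, contains a cycle (a sketch; segment and transaction names here are made up):

```python
# Each segment only sees its local wait-for edges; neither local graph
# has a cycle. The global detector unions them and finds one.
def find_cycle(edges: set) -> bool:
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)

    def reachable(src, dst):
        stack, seen = [src], set()
        while stack:
            n = stack.pop()
            if n == dst:
                return True
            if n in seen:
                continue
            seen.add(n)
            stack.extend(graph.get(n, ()))
        return False

    # An edge (a, b) closes a cycle iff a is reachable back from b.
    return any(reachable(b, a) for a, b in edges)

seg_a = {("tx1", "tx2")}          # on node A: tx1 waits for tx2
seg_b = {("tx2", "tx1")}          # on node B: tx2 waits for tx1
print(find_cycle(seg_a))          # False -- no local deadlock
print(find_cycle(seg_b))          # False
print(find_cycle(seg_a | seg_b))  # True  -- global deadlock
```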

Reproducing it in the test environment.

Two rows are located on different nodes:

```
qmstst=# select gp_segment_id,* from locktest where id in (73,16);
 gp_segment_id | id | cname |              remark
---------------+----+-------+----------------------------------
             2 | 73 | MV CC | 497f9db67916e4d69a6ee114abb78a01
             3 | 16 | MV CC | 497f9db67916e4d69a6ee114abb78a01
(2 rows)
```

psql1

```
qmstst=# \timing
Timing is on.
qmstst=# begin;
BEGIN
Time: 0.168 ms
qmstst=# update locktest set cname = '11111' where id = '16';
UPDATE 1
Time: 23.146 ms
```

psql2

```
qmstst=# begin;
BEGIN
Time: 0.190 ms
qmstst=# update locktest set cname = '11111' where id = '73';
UPDATE 1
Time: 10.405 ms
qmstst=# update locktest set cname = '11111' where id = '16';
UPDATE 1
Time: 62585.963 ms
```

The second UPDATE takes a long time; that is the global deadlock detection at work.

Back in psql1, update id=73:

```
qmstst=# update locktest set cname = '11111' where id = '73';
ERROR:  canceling statement due to user request: "cancelled by global deadlock detector"
Time: 53075.021 ms
```

The UPDATE in psql2 stays blocked until one of the deadlock detection mechanisms (single-node or cluster-wide) fires.

Note: Greenplum configures the interval for local deadlock detection with the parameter below. Because the local and global detection algorithms differ, which process the detector terminates depends on whether local or global detection triggers first.

```
qmstst=# show deadlock_timeout;
 deadlock_timeout
------------------
 1s
(1 row)

qmstst=# show gp_global_deadlock_detector_period;
 gp_global_deadlock_detector_period
------------------------------------
 2min
(1 row)
```

In practice, local deadlocks are detected very quickly, while global deadlock detection takes noticeably longer; I suspect its algorithm is more complex than the local one. See reference 2 for the details.

 

References:

1.

2.

 

Reprinted from: http://sthqj.baihongyu.com/
