Tags
PostgreSQL, CPU parallelism, SMP parallelism, parallel computing, GPU parallelism, parallel procedure support
## Background
The PostgreSQL 11 optimizer supports parallelism in a very wide range of situations. By a rough count, more than 27 kinds of parallel operations are already supported:
- parallel seq scan
- parallel index scan
- parallel index only scan
- parallel bitmap scan
- parallel filter
- parallel hash agg
- parallel group agg
- parallel cte
- parallel subquery
- parallel create table
- parallel create index
- parallel select into
- parallel CREATE MATERIALIZED VIEW
- parallel sort: gather merge
- parallel nestloop join
- parallel hash join
- parallel merge join
- parallel custom aggregates
- parallel custom UDFs
- parallel append
- parallel union
- parallel fdw table scan
- parallel partition join
- parallel partition agg
- parallel gather
- parallel gather merge
- parallel under read committed (RC)
- parallel under repeatable read (RR)
- parallel GPU computing
- parallel unlogged table
- lead parallel
Each of these is introduced in turn.

Please familiarize yourself with the key background first:

1. The optimizer's automatic degree-of-parallelism (CBO) algorithm; see:
《PostgreSQL 9.6 并行计算 优化器算法浅析》
《PostgreSQL 11 并行计算算法,参数,强制并行度设置》

## parallel append
Parallel execution of multiple segments.

For example, operations on partitioned tables: when a query touches multiple partitions, the execution of each partition forms an independent segment, multiple segments can run in parallel, and the optimizer can append their results in parallel.
Data volume: 1 billion rows.

Scenario | Rows | Parallel off | Parallel on | Parallelism | Speedup with parallelism
---|---|---|---|---|---
parallel append | 1 billion | 70.5 s | 3.16 s | 24 | 22.3x
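The post does not show the DDL for the test table. Below is a minimal sketch of a setup that is consistent with the plans that follow: a table ccc with 24 hash partitions ccc0 .. ccc23, partitioned on a column other than order_id (otherwise partition pruning would keep the query from touching all 24 partitions). Everything other than the names ccc and order_id is an assumption.

```sql
-- Minimal sketch only; the author's actual DDL is not shown in the post.
-- Hash partitioning on id (not on order_id) is an assumption that matches the
-- plans below, where the predicate order_id = 1 scans all 24 partitions.
create table ccc (id int8, order_id int8) partition by hash (id);

create table ccc0 partition of ccc for values with (modulus 24, remainder 0);
create table ccc1 partition of ccc for values with (modulus 24, remainder 1);
-- ... create ccc2 through ccc23 the same way, remainders 2 .. 23

-- load roughly 1 billion rows; in the author's data set exactly one row
-- satisfies order_id = 1 (the count below returns 1)
insert into ccc select i, 0 from generate_series(1, 1000000000) i;
update ccc set order_id = 1 where id = 1;
```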
```
postgres=# show max_worker_processes ;
max_worker_processes
----------------------
128
(1 row)
postgres=# set min_parallel_table_scan_size =0;
postgres=# set min_parallel_index_scan_size =0;
postgres=# set parallel_tuple_cost =0;
postgres=# set parallel_setup_cost =0;
postgres=# set max_parallel_workers=128;
postgres=# set max_parallel_workers_per_gather =24;
postgres=# set enable_parallel_hash =on;
postgres=# set enable_parallel_append =on;
postgres=# set enable_partitionwise_aggregate =off;
postgres=# set work_mem ='128MB';
```
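For reference, a brief note on what each of these settings does (the values are the ones set above; they are chosen so that the optimizer strongly prefers parallel plans for this test):

```sql
set min_parallel_table_scan_size = 0;      -- consider parallel scans even for tiny tables
set min_parallel_index_scan_size = 0;      -- same for index scans
set parallel_tuple_cost = 0;               -- ignore the cost of passing tuples to the leader
set parallel_setup_cost = 0;               -- ignore the cost of launching workers
set max_parallel_workers = 128;            -- global cap on parallel workers
set max_parallel_workers_per_gather = 24;  -- per-Gather cap, i.e. the degree used here
set enable_parallel_hash = on;             -- allow shared parallel hash (parallel hash join)
set enable_parallel_append = on;           -- allow the Parallel Append plan node
set enable_partitionwise_aggregate = off;  -- partition-wise aggregation is not needed for this test
set work_mem = '128MB';                    -- per sort/hash memory, available to each worker
```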
1. With parallelism disabled, elapsed time: 70.5 seconds.
```
postgres=# set max_parallel_workers_per_gather =0;
postgres=# set enable_parallel_append =off;
postgres=# explain select count(*) from ccc where order_id=1;
QUERY PLAN
----------------------------------------------------------------------
Aggregate (cost=17905421.61..17905421.62 rows=1 width=8)
-> Append (cost=0.00..17905421.55 rows=24 width=0)
-> Seq Scan on ccc0 (cost=0.00..745998.20 rows=1 width=0)
Filter: (order_id = 1)
-> Seq Scan on ccc1 (cost=0.00..727405.10 rows=1 width=0)
Filter: (order_id = 1)
-> Seq Scan on ccc2 (cost=0.00..839291.45 rows=1 width=0)
Filter: (order_id = 1)
-> Seq Scan on ccc3 (cost=0.00..634111.15 rows=1 width=0)
Filter: (order_id = 1)
-> Seq Scan on ccc4 (cost=0.00..764438.90 rows=1 width=0)
Filter: (order_id = 1)
-> Seq Scan on ccc5 (cost=0.00..708800.20 rows=1 width=0)
Filter: (order_id = 1)
-> Seq Scan on ccc6 (cost=0.00..727511.15 rows=1 width=0)
Filter: (order_id = 1)
-> Seq Scan on ccc7 (cost=0.00..783234.00 rows=1 width=0)
Filter: (order_id = 1)
-> Seq Scan on ccc8 (cost=0.00..699378.05 rows=1 width=0)
Filter: (order_id = 1)
-> Seq Scan on ccc9 (cost=0.00..708898.20 rows=1 width=0)
Filter: (order_id = 1)
-> Seq Scan on ccc10 (cost=0.00..783522.20 rows=1 width=0)
Filter: (order_id = 1)
-> Seq Scan on ccc11 (cost=0.00..615479.05 rows=1 width=0)
Filter: (order_id = 1)
-> Seq Scan on ccc12 (cost=0.00..951260.55 rows=1 width=0)
Filter: (order_id = 1)
-> Seq Scan on ccc13 (cost=0.00..783499.00 rows=1 width=0)
Filter: (order_id = 1)
-> Seq Scan on ccc14 (cost=0.00..913779.60 rows=1 width=0)
Filter: (order_id = 1)
-> Seq Scan on ccc15 (cost=0.00..708653.05 rows=1 width=0)
Filter: (order_id = 1)
-> Seq Scan on ccc16 (cost=0.00..736590.70 rows=1 width=0)
Filter: (order_id = 1)
-> Seq Scan on ccc17 (cost=0.00..783320.20 rows=1 width=0)
Filter: (order_id = 1)
-> Seq Scan on ccc18 (cost=0.00..932607.90 rows=1 width=0)
Filter: (order_id = 1)
-> Seq Scan on ccc19 (cost=0.00..615568.50 rows=1 width=0)
Filter: (order_id = 1)
-> Seq Scan on ccc20 (cost=0.00..746233.40 rows=1 width=0)
Filter: (order_id = 1)
-> Seq Scan on ccc21 (cost=0.00..466366.88 rows=1 width=0)
Filter: (order_id = 1)
-> Seq Scan on ccc22 (cost=0.00..783250.55 rows=1 width=0)
Filter: (order_id = 1)
-> Seq Scan on ccc23 (cost=0.00..746223.45 rows=1 width=0)
Filter: (order_id = 1)
(50 rows)
postgres=# select count(*) from ccc where order_id=1;
count
-------
1
(1 row)
Time: 70514.708 ms (01:10.515)
```
2. With parallelism enabled, elapsed time: 3.16 seconds.
```
postgres=# set max_parallel_workers_per_gather =24;
postgres=# set enable_parallel_append =on;
postgres=# explain select count(*) from ccc where order_id=1;
QUERY PLAN
-------------------------------------------------------------------------------------
Aggregate (cost=5926253.57..5926253.58 rows=1 width=8)
-> Gather (cost=0.00..5926253.51 rows=24 width=0)
Workers Planned: 24
-> Parallel Append (cost=0.00..5926253.51 rows=24 width=0)
-> Parallel Seq Scan on ccc12 (cost=0.00..314843.31 rows=1 width=0)
Filter: (order_id = 1)
-> Parallel Seq Scan on ccc18 (cost=0.00..308669.75 rows=1 width=0)
Filter: (order_id = 1)
-> Parallel Seq Scan on ccc14 (cost=0.00..302438.07 rows=1 width=0)
Filter: (order_id = 1)
-> Parallel Seq Scan on ccc2 (cost=0.00..277784.35 rows=1 width=0)
Filter: (order_id = 1)
-> Parallel Seq Scan on ccc10 (cost=0.00..259326.13 rows=1 width=0)
Filter: (order_id = 1)
-> Parallel Seq Scan on ccc13 (cost=0.00..259318.46 rows=1 width=0)
Filter: (order_id = 1)
-> Parallel Seq Scan on ccc17 (cost=0.00..259263.09 rows=1 width=0)
Filter: (order_id = 1)
-> Parallel Seq Scan on ccc22 (cost=0.00..259236.23 rows=1 width=0)
Filter: (order_id = 1)
-> Parallel Seq Scan on ccc7 (cost=0.00..259230.75 rows=1 width=0)
Filter: (order_id = 1)
-> Parallel Seq Scan on ccc4 (cost=0.00..253010.04 rows=1 width=0)
Filter: (order_id = 1)
-> Parallel Seq Scan on ccc20 (cost=0.00..246984.48 rows=1 width=0)
Filter: (order_id = 1)
-> Parallel Seq Scan on ccc23 (cost=0.00..246981.19 rows=1 width=0)
Filter: (order_id = 1)
-> Parallel Seq Scan on ccc0 (cost=0.00..246906.63 rows=1 width=0)
Filter: (order_id = 1)
-> Parallel Seq Scan on ccc16 (cost=0.00..243792.99 rows=1 width=0)
Filter: (order_id = 1)
-> Parallel Seq Scan on ccc6 (cost=0.00..240788.84 rows=1 width=0)
Filter: (order_id = 1)
-> Parallel Seq Scan on ccc1 (cost=0.00..240752.80 rows=1 width=0)
Filter: (order_id = 1)
-> Parallel Seq Scan on ccc9 (cost=0.00..234627.47 rows=1 width=0)
Filter: (order_id = 1)
-> Parallel Seq Scan on ccc5 (cost=0.00..234597.51 rows=1 width=0)
Filter: (order_id = 1)
-> Parallel Seq Scan on ccc15 (cost=0.00..234546.34 rows=1 width=0)
Filter: (order_id = 1)
-> Parallel Seq Scan on ccc8 (cost=0.00..231476.54 rows=1 width=0)
Filter: (order_id = 1)
-> Parallel Seq Scan on ccc3 (cost=0.00..209876.96 rows=1 width=0)
Filter: (order_id = 1)
-> Parallel Seq Scan on ccc19 (cost=0.00..203737.69 rows=1 width=0)
Filter: (order_id = 1)
-> Parallel Seq Scan on ccc11 (cost=0.00..203708.09 rows=1 width=0)
Filter: (order_id = 1)
-> Parallel Seq Scan on ccc21 (cost=0.00..154355.70 rows=1 width=0)
Filter: (order_id = 1)
(52 rows)
postgres=# select count(*) from ccc where order_id=1;
count
-------
1
(1 row)
Time: 3163.179 ms (00:03.163)
```
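Plain EXPLAIN only shows "Workers Planned". To confirm how many workers were actually started at run time, EXPLAIN (ANALYZE) additionally reports "Workers Launched", at the price of actually executing the query:

```sql
-- Executes the query; the Gather node in the output also shows
-- "Workers Launched: N", which can be lower than the planned number
-- if the max_parallel_workers pool is already exhausted.
explain (analyze, costs off) select count(*) from ccc where order_id = 1;
```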
Note that if the combined result sets of all the parallel segments are large, the appended output becomes very large, and the bottleneck can end up at the Parallel Append node itself.

That is why this example adds a filter in the WHERE clause: each segment returns a tiny result set, the Parallel Append node does not become a bottleneck, and the performance gain is very pronounced.

In general, when the segments are partitions of a partitioned table, combine this with the other partition-wise optimizations (enable_partitionwise_aggregate, enable_partitionwise_join) to keep each segment's result set as small as possible and improve overall performance.
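A minimal sketch of combining these settings, under an assumed schema (a table orders hash-partitioned on customer_id; the names are illustrative, not from the post):

```sql
-- Partition-wise aggregation applies when the GROUP BY contains the
-- partition key, so each partition can be aggregated independently and
-- only the small per-partition aggregates flow through the Append node.
set enable_partitionwise_aggregate = on;
set enable_partitionwise_join = on;

explain
select customer_id, count(*)
from orders
group by customer_id;
```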
## Other notes
2. Checking whether a function or operator supports parallelism (the proparallel column in pg_proc):
```
postgres=# select proparallel,proname from pg_proc;
proparallel | proname
-------------+----------------------------------------------
s | boolin
s | boolout
s | byteain
s | byteaout
```
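proparallel is 's' (safe), 'r' (restricted) or 'u' (unsafe). To check a specific function, or an operator through its underlying function, something like the following works (a sketch; the function and operator names are just examples):

```sql
-- parallel-safety marking of a specific function
select proname, proparallel
from pg_proc
where proname = 'upper';

-- operators delegate to a function; oprcode links pg_operator to pg_proc
select o.oprname, p.proname, p.proparallel
from pg_operator o
join pg_proc p on p.oid = o.oprcode
where o.oprname = '+'
limit 5;
```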
3. Subqueries / MapReduce-style staging with unlogged tables.

In some cases, to relieve the optimizer of the burden of parallel-optimizing an extremely complex SQL statement, you can split the SQL into several stages yourself and store the intermediate results in unlogged tables, in the spirit of MapReduce. Unlogged tables also support parallel computation.
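A minimal sketch of the idea (the intermediate table name is illustrative): each stage is a CREATE TABLE AS that can itself run with a parallel plan, and the unlogged intermediate table can again be read in parallel by the next stage.

```sql
-- Stage 1: heavy aggregation, materialized into an unlogged table.
-- CREATE TABLE AS supports parallel plans, and unlogged tables skip WAL.
create unlogged table stage1 as
select order_id, count(*) as cnt
from ccc
group by order_id;

-- Stage 2: further processing reads the (much smaller) intermediate result,
-- which can itself be scanned in parallel if it is still large enough.
select max(cnt) from stage1;
```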
4. Vacuum: parallel garbage collection.
5. Parallelism through asynchronous dblink calls (a sketch follows; see also the related articles below).
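A minimal sketch of asynchronous dblink calls used as do-it-yourself parallelism (the connection names, connection string, and per-partition queries are illustrative; the articles listed below contain complete examples):

```sql
create extension if not exists dblink;

-- open two extra connections back to the same database
select dblink_connect('c1', 'dbname=postgres');
select dblink_connect('c2', 'dbname=postgres');

-- send both queries without waiting; they now run concurrently
select dblink_send_query('c1', 'select count(*) from ccc0 where order_id = 1');
select dblink_send_query('c2', 'select count(*) from ccc1 where order_id = 1');

-- collect the results afterwards
select * from dblink_get_result('c1') as t(cnt int8);
select * from dblink_get_result('c2') as t(cnt int8);

select dblink_disconnect('c1');
select dblink_disconnect('c2');
```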
《PostgreSQL VOPS 向量计算 + DBLINK异步并行 - 单实例 10亿 聚合计算跑进2秒》
《PostgreSQL 相似搜索分布式架构设计与实践 - dblink异步调用与多机并行(远程 游标+记录 UDF实例)》
《PostgreSQL dblink异步调用实现 并行hash分片JOIN - 含数据交、并、差 提速案例 - 含dblink VS pg 11 parallel hash join VS pg 11 智能分区JOIN》

Scenarios where parallelism is not yet allowed (PostgreSQL will keep expanding support in future releases):
1. Statements that modify or lock rows; the exceptions are CREATE TABLE AS, SELECT INTO and CREATE MATERIALIZED VIEW, which can use parallelism.
2. Queries that can be suspended partway through, such as cursors or loops in PL/pgSQL: because intermediate state has to be handled, enabling parallelism is not recommended.
3. Parallel-unsafe UDFs: such UDFs are never parallelized (see the sketch after this list).
4. Nested parallelism (a UDF whose internal query runs in parallel): SQL that calls such a UDF will not itself be parallelized (mainly to avoid spawning too many parallel workers).
5. The serializable (SSI) isolation level.
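Regarding parallel-unsafe UDFs: whether a query that calls a UDF can still use a parallel plan is controlled by the function's PARALLEL marking at creation time. A minimal sketch (the function is illustrative, not from the post):

```sql
-- Functions default to PARALLEL UNSAFE; marking a function PARALLEL SAFE
-- tells the planner it may run inside parallel workers.
create or replace function is_big(v int8) returns boolean
language sql immutable parallel safe
as $$ select v > 100 $$;

-- With PARALLEL SAFE, this query can still get a Gather + Parallel Seq Scan
-- plan; with the default PARALLEL UNSAFE it falls back to a serial plan.
explain select count(*) from ccc where is_big(order_id);
```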
## References
https://www.postgresql.org/docs/11/parallel-plans.html

《PostgreSQL 11 preview - 并行计算 增强 汇总》

《PostgreSQL 10 自定义并行计算聚合函数的原理与实践 - (含array_agg合并多个数组为单个一元数组的例子)》
