Tags
PostgreSQL, CPU parallelism, SMP parallelism, parallel computing, GPU parallelism, parallel procedure support
Background
The PostgreSQL 11 optimizer already supports parallelism in a very wide range of scenarios. A rough count puts it at more than 27 kinds of parallel operations:
parallel seq scan
parallel index scan
parallel index only scan
parallel bitmap scan
parallel filter
parallel hash agg
parallel group agg
parallel cte
parallel subquery
parallel create table
parallel create index
parallel select into
parallel CREATE MATERIALIZED VIEW
parallel sort: gather merge
parallel nestloop join
parallel hash join
parallel merge join
parallel user-defined aggregates
parallel user-defined functions (UDFs)
parallel append
parallel append merge
parallel union
parallel fdw table scan
parallel partition join
parallel partition agg
parallel gather
parallel gather merge
parallel under read committed (RC) isolation
parallel under repeatable read (RR) isolation
parallel GPU computing
parallel unlogged table
lead parallel
They are introduced one by one.
Key background knowledge to review first:
1. The optimizer's automatic parallel-degree algorithm (CBO)
《PostgreSQL 9.6 并行計算 優化器算法淺析》
《PostgreSQL 11 并行計算算法,參數,強制并行度設定》

parallel append merge
Multiple segments executed in parallel, with sorted output.
Take partitioned tables as an example: when a query touches multiple partitions, the execution for each partition forms an independent segment, multiple partitions can be executed in parallel, and the optimizer supports appending the results in parallel.
If the results of these segments also need to be sorted, the optimizer can return ordered results within each segment and then merge the ordered streams (similar to a merge sort, using gather merge); this is parallel append merge.
When parallel append and gather merge appear together in a plan, MergeAppend-style merging is being used.
src/backend/executor/nodeMergeAppend.c
/* INTERFACE ROUTINES
* ExecInitMergeAppend - initialize the MergeAppend node
* ExecMergeAppend - retrieve the next tuple from the node
* ExecEndMergeAppend - shut down the MergeAppend node
* ExecReScanMergeAppend - rescan the MergeAppend node
*
* NOTES
* A MergeAppend node contains a list of one or more subplans.
* These are each expected to deliver tuples that are sorted according
* to a common sort key. The MergeAppend node merges these streams
* to produce output sorted the same way.
*
* MergeAppend nodes don't make use of their left and right
* subtrees, rather they maintain a list of subplans so
* a typical MergeAppend node looks like this in the plan tree:
*
* ...
* /
* MergeAppend---+------+------+--- nil
* / \ | | |
* nil nil ... ... ...
* subplans
*/
Data volume: 1 billion rows

Data volume | Parallel off | Parallel on | Degree of parallelism | Speedup with parallelism on
---|---|---|---|---
1 billion | 99.4 s | 5.87 s | 24 | 16.93x
Since parallel append can be combined with parallel scan (multiplying the number of workers), you can avoid this by setting parallel_workers to 0 on each partition and then using a hint to force parallelism on the parent table; this effectively lets you control the degree of parallelism of parallel append.
Documentation: https://www.openscg.com/bigsql/docs/hintplan/

git clone https://github.com/ossc-db/pg_hint_plan
cd pg_hint_plan/
git checkout PG11
USE_PGXS=1 make
USE_PGXS=1 make install
vi $PGDATA/postgresql.conf
shared_preload_libraries = 'pg_hint_plan'
pg_ctl restart -m fast
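After the restart, a quick sanity check (my addition, not in the original post) confirms that the library is actually preloaded and that hint parsing is switched on before relying on it:

-- verify pg_hint_plan is loaded and enable hint parsing for this session
show shared_preload_libraries;
set pg_hint_plan.enable_hint = on;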
Example: a hash-partitioned table with 24 partitions.
CREATE unlogged TABLE ccc (
order_id bigint not null,
cust_id bigint not null,
status text
) PARTITION BY HASH (cust_id);
do language plpgsql $$
declare
begin
for i in 0..23 loop
execute format('CREATE unlogged TABLE %s%s PARTITION OF %s FOR VALUES WITH (MODULUS %s, REMAINDER %s)', 'ccc', i, 'ccc', 24, i);
execute format('alter table %s%s set(parallel_workers =0)', 'ccc',i);
end loop;
end;
$$;
postgres=# \d ccc
Unlogged table "public.ccc"
Column | Type | Collation | Nullable | Default
----------+--------+-----------+----------+---------
order_id | bigint | | not null |
cust_id | bigint | | not null |
status | text | | |
Partition key: HASH (cust_id)
Number of partitions: 24 (Use \d+ to list them.)
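To double-check that the parallel_workers = 0 storage parameter from the DO block really landed on every partition, the reloptions column of pg_class can be inspected (a sanity query I added, not part of the original post):

-- each partition (relkind 'r') should show {parallel_workers=0}; the parent (relkind 'p') carries no storage options
select relname, relkind, reloptions
from pg_class
where relname like 'ccc%'
order by relname;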
Insert 1 billion rows
insert into ccc select i, random()*960 from generate_series(1,1000000000) t(i);
vacuum (analyze,verbose) ccc;
postgres=# show max_worker_processes ;
max_worker_processes
----------------------
128
(1 row)
postgres=# set min_parallel_table_scan_size =0;
postgres=# set min_parallel_index_scan_size =0;
postgres=# set parallel_tuple_cost =0;
postgres=# set parallel_setup_cost =0;
postgres=# set max_parallel_workers=128;
postgres=# set max_parallel_workers_per_gather =24;
postgres=# set enable_parallel_hash =on;
postgres=# set enable_parallel_append =on;
postgres=# set enable_partitionwise_aggregate =off;
postgres=# set work_mem ='128MB';
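The parallelism-related settings now in effect can be listed from pg_settings (a convenience query of mine, not from the original post):

-- show all parallel-related GUCs after the changes above
select name, setting from pg_settings where name like '%parallel%' order by name;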
1. Parallelism disabled, elapsed time: 99.4 seconds.
postgres=# set max_parallel_workers_per_gather =0;
postgres=# set enable_parallel_append =off;
postgres=# explain select * from ccc order by order_id limit 10;
QUERY PLAN
------------------------------------------------------------------------------------
Limit (cost=42015064.65..42015064.67 rows=10 width=48)
-> Sort (cost=42015064.65..44515064.93 rows=1000000114 width=48)
Sort Key: ccc0.order_id
-> Append (cost=0.00..20405421.71 rows=1000000114 width=48)
-> Seq Scan on ccc0 (cost=0.00..641839.96 rows=41663296 width=48)
-> Seq Scan on ccc1 (cost=0.00..625842.88 rows=40624888 width=48)
-> Seq Scan on ccc2 (cost=0.00..722107.36 rows=46873636 width=48)
-> Seq Scan on ccc3 (cost=0.00..545575.32 rows=35414332 width=48)
-> Seq Scan on ccc4 (cost=0.00..657705.92 rows=42693192 width=48)
-> Seq Scan on ccc5 (cost=0.00..609836.16 rows=39585616 width=48)
-> Seq Scan on ccc6 (cost=0.00..625934.32 rows=40630732 width=48)
-> Seq Scan on ccc7 (cost=0.00..673876.80 rows=43742880 width=48)
-> Seq Scan on ccc8 (cost=0.00..601729.04 rows=39059604 width=48)
-> Seq Scan on ccc9 (cost=0.00..609919.96 rows=39591296 width=48)
-> Seq Scan on ccc10 (cost=0.00..674124.76 rows=43758976 width=48)
-> Seq Scan on ccc11 (cost=0.00..529544.24 rows=34373924 width=48)
-> Seq Scan on ccc12 (cost=0.00..818443.04 rows=53127004 width=48)
-> Seq Scan on ccc13 (cost=0.00..674104.80 rows=43757680 width=48)
-> Seq Scan on ccc14 (cost=0.00..786195.28 rows=51033728 width=48)
-> Seq Scan on ccc15 (cost=0.00..609709.04 rows=39577604 width=48)
-> Seq Scan on ccc16 (cost=0.00..633745.96 rows=41137896 width=48)
-> Seq Scan on ccc17 (cost=0.00..673951.76 rows=43747376 width=48)
-> Seq Scan on ccc18 (cost=0.00..802394.72 rows=52085272 width=48)
-> Seq Scan on ccc19 (cost=0.00..529621.20 rows=34378920 width=48)
-> Seq Scan on ccc20 (cost=0.00..642042.32 rows=41676432 width=48)
-> Seq Scan on ccc21 (cost=0.00..401251.50 rows=26046150 width=48)
-> Seq Scan on ccc22 (cost=0.00..673891.04 rows=43743804 width=48)
-> Seq Scan on ccc23 (cost=0.00..642033.76 rows=41675876 width=48)
(28 rows)
postgres=# select * from ccc order by order_id limit 10;
order_id | cust_id | status
----------+---------+--------
1 | 649 |
2 | 226 |
3 | 816 |
4 | 844 |
5 | 827 |
6 | 456 |
7 | 810 |
8 | 365 |
9 | 49 |
10 | 75 |
(10 rows)
Time: 99416.529 ms (01:39.417)
2. Parallelism enabled, elapsed time: 5.87 seconds.
postgres=# set max_parallel_workers_per_gather =24;
postgres=# set enable_parallel_append =on;
postgres=# set client_min_messages =debug;
postgres=# set pg_hint_plan.debug_print =on;
postgres=# set pg_hint_plan.enable_hint=on;
postgres=# set pg_hint_plan.message_level =debug;
postgres=# explain /*+ Parallel(ccc 24 hard) */ select * from ccc order by order_id limit 10;
DEBUG: pg_hint_plan:
used hint:
Parallel(ccc 24 hard)
not used hint:
duplication hint:
error hint:
QUERY PLAN
------------------------------------------------------------------------------------------
Limit (cost=4321929.13..4321929.39 rows=10 width=48)
-> Gather Merge (cost=4321929.13..128274490.70 rows=4800000504 width=48)
Workers Planned: 24
-> Sort (cost=4321928.55..4821928.60 rows=200000021 width=48)
Sort Key: ccc12.order_id
-> Parallel Append (cost=0.00..0.00 rows=200000021 width=48)
-> Seq Scan on ccc12 (cost=0.00..818443.04 rows=53127004 width=48)
-> Seq Scan on ccc18 (cost=0.00..802394.72 rows=52085272 width=48)
-> Seq Scan on ccc14 (cost=0.00..786195.28 rows=51033728 width=48)
-> Seq Scan on ccc2 (cost=0.00..722107.36 rows=46873636 width=48)
-> Seq Scan on ccc10 (cost=0.00..674124.76 rows=43758976 width=48)
-> Seq Scan on ccc13 (cost=0.00..674104.80 rows=43757680 width=48)
-> Seq Scan on ccc17 (cost=0.00..673951.76 rows=43747376 width=48)
-> Seq Scan on ccc22 (cost=0.00..673891.04 rows=43743804 width=48)
-> Seq Scan on ccc7 (cost=0.00..673876.80 rows=43742880 width=48)
-> Seq Scan on ccc4 (cost=0.00..657705.92 rows=42693192 width=48)
-> Seq Scan on ccc20 (cost=0.00..642042.32 rows=41676432 width=48)
-> Seq Scan on ccc23 (cost=0.00..642033.76 rows=41675876 width=48)
-> Seq Scan on ccc0 (cost=0.00..641839.96 rows=41663296 width=48)
-> Seq Scan on ccc16 (cost=0.00..633745.96 rows=41137896 width=48)
-> Seq Scan on ccc6 (cost=0.00..625934.32 rows=40630732 width=48)
-> Seq Scan on ccc1 (cost=0.00..625842.88 rows=40624888 width=48)
-> Seq Scan on ccc9 (cost=0.00..609919.96 rows=39591296 width=48)
-> Seq Scan on ccc5 (cost=0.00..609836.16 rows=39585616 width=48)
-> Seq Scan on ccc15 (cost=0.00..609709.04 rows=39577604 width=48)
-> Seq Scan on ccc8 (cost=0.00..601729.04 rows=39059604 width=48)
-> Seq Scan on ccc3 (cost=0.00..545575.32 rows=35414332 width=48)
-> Seq Scan on ccc19 (cost=0.00..529621.20 rows=34378920 width=48)
-> Seq Scan on ccc11 (cost=0.00..529544.24 rows=34373924 width=48)
-> Seq Scan on ccc21 (cost=0.00..401251.50 rows=26046150 width=48)
(30 rows)
postgres=# /*+ Parallel(ccc 24 hard) */ select * from ccc order by order_id limit 10;
DEBUG: pg_hint_plan:
used hint:
Parallel(ccc 24 hard)
not used hint:
duplication hint:
error hint:
order_id | cust_id | status
----------+---------+--------
1 | 649 |
2 | 226 |
3 | 816 |
4 | 844 |
5 | 827 |
6 | 456 |
7 | 810 |
8 | 365 |
9 | 49 |
10 | 75 |
(10 rows)
Time: 5868.558 ms (00:05.869)
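To see how many workers were actually launched at run time, EXPLAIN (ANALYZE) can be wrapped around the same hinted query (a sketch; timings and worker counts will vary by machine):

-- actual execution: look for "Workers Planned" / "Workers Launched" under the Gather Merge node
explain (analyze, costs off) /*+ Parallel(ccc 24 hard) */ select * from ccc order by order_id limit 10;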
Hopefully future PostgreSQL versions will provide better parameters for controlling the degree of parallelism of parallel append, or a better way to avoid multiplying it with parallel scan.
Other knowledge
2. Checking whether a function or operator supports parallelism
postgres=# select proparallel,proname from pg_proc;
proparallel | proname
-------------+----------------------------------------------
s | boolin
s | boolout
s | byteain
s | byteaout
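The proparallel flag is 's' (safe), 'r' (restricted) or 'u' (unsafe). To check specific functions, or an operator via the function that implements it, queries like the following can be used (my own examples, not from the original post):

-- parallel safety of a few built-ins: sum is safe, random is restricted, currval is unsafe
select proname, proparallel from pg_proc where proname in ('sum', 'random', 'currval');

-- parallel safety of an operator, via its underlying function
select o.oprname, p.proname, p.proparallel
from pg_operator o join pg_proc p on p.oid = o.oprcode
where o.oprname = '+' limit 5;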
3. Subqueries / map-reduce style staging with unlogged tables
In some situations, to reduce the optimizer's burden of parallel-planning an extremely complex SQL statement, you can split the SQL into several stages yourself and keep the intermediate results in unlogged tables, similar to the map-reduce idea. Unlogged tables support parallel computation as well.
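A minimal sketch of this staging idea, using a hypothetical intermediate table name (stage1); each stage remains eligible for its own parallel plan:

-- stage 1: the heavy scan/aggregation, eligible for a parallel plan
create unlogged table stage1 as
select cust_id, count(*) as cnt
from ccc
group by cust_id;

analyze stage1;

-- stage 2: further processing runs against the much smaller intermediate result
select count(*) from stage1 where cnt > 1000000;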
4. vacuum: parallel garbage collection.
5. dblink asynchronous calls for parallelism
《PostgreSQL VOPS 向量計算 + DBLINK異步并行 - 單執行個體 10億 聚合計算跑進2秒》
《PostgreSQL 相似搜尋分布式架構設計與實踐 - dblink異步調用與多機并行(遠端 遊标+記錄 UDF執行個體)》
《PostgreSQL dblink異步調用實作 并行hash分片JOIN - 含資料交、并、差 提速案例 - 含dblink VS pg 11 parallel hash join VS pg 11 智能分區JOIN》

Scenarios where parallelism is currently not allowed (PostgreSQL will keep expanding support in future releases):
1. Statements that modify or lock rows, except for CREATE TABLE AS, SELECT INTO, and CREATE MATERIALIZED VIEW, which can use parallelism.
2. Queries that may be suspended and resumed piecemeal, such as cursors or loops in PL/pgSQL; because intermediate processing is involved, parallelism is not used for them.
3. Parallel-unsafe UDFs; such UDFs are never parallelized (see the sketch after this list for how parallel safety is declared).
4. Nested parallelism (a UDF whose internal query runs in parallel): the outer SQL calling such a UDF will not be parallelized (mainly to prevent an excessive number of parallel workers).
5. The serializable (SSI) isolation level.
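A minimal sketch of declaring parallel safety on a user-defined function (my_double is a hypothetical name); by default a new function is PARALLEL UNSAFE, which prevents parallel plans for queries that call it:

-- mark the function PARALLEL SAFE so the planner may use it inside parallel plans
create or replace function my_double(x int) returns int
as $$ select x * 2 $$
language sql
parallel safe;

-- confirm the marking ('s' = safe)
select proname, proparallel from pg_proc where proname = 'my_double';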
References
https://www.postgresql.org/docs/11/parallel-plans.html
《PostgreSQL 11 preview - 并行計算 增強 彙總》
《PostgreSQL 10 自定義并行計算聚合函數的原理與實踐 - (含array_agg合并多個數組為單個一進制數組的例子)》
