标簽
PostgreSQL , cpu 并行 , smp 并行 , 并行計算 , gpu 并行 , 并行過程支援
Background
The PostgreSQL 11 optimizer already supports parallelism in a great many situations; a rough count puts it at 27 or more kinds of parallel operations. (A quick way to confirm that a given query actually gets a parallel plan is sketched right after this list.)
parallel seq scan
parallel index scan
parallel index only scan
parallel bitmap scan
parallel filter
parallel hash agg
parallel group agg
parallel cte
parallel recursive query, tree query, heterogeneous query, CTE, recursive CTE, connect by
parallel subquery
parallel create table
parallel create index
parallel CREATE INDEX CONCURRENTLY - does not block reads or writes
parallel select into
parallel CREATE MATERIALIZED VIEW
parallel sort : gather merge
parallel nestloop join
parallel hash join
parallel merge join
parallel user-defined aggregates
parallel user-defined functions (UDF)
parallel append
parallel append merge
parallel union all
parallel fdw table scan
parallel partition join
parallel partition agg
parallel gather
parallel gather merge
parallel under read committed (RC) isolation
parallel under repeatable read (RR) isolation
parallel GPU computing
parallel unlogged table
parallel leader participation (parallel_leader_participation)
Each of these is introduced one by one below.
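As a quick aside (a minimal sketch, not part of the test case below; the demo table t1 is hypothetical), you can confirm that the optimizer chose a parallel plan by looking for Gather / Gather Merge nodes and "Workers Planned" in the EXPLAIN output:

-- Hypothetical demo table, large enough for the planner to consider parallelism.
create table t1 as select id, md5(id::text) as info from generate_series(1, 10000000) t(id);
vacuum analyze t1;

set max_parallel_workers_per_gather = 4;  -- allow up to 4 workers per Gather node

explain select count(*) from t1;
-- A parallel plan has roughly this shape:
--   Finalize Aggregate
--     -> Gather  (Workers Planned: 4)
--          -> Partial Aggregate
--               -> Parallel Seq Scan on t1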
Key background knowledge, please review first:
1. The optimizer's automatic parallel-degree algorithm (CBO)
《PostgreSQL 9.6 并行計算 優化器算法淺析》
《PostgreSQL 11 并行計算算法,參數,強制并行度設定》
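For quick reference (a sketch; the listed defaults are PostgreSQL 11 defaults, and tbl refers to the test table created in the test case below), the automatic parallel degree is derived from table/index size relative to the size thresholds, capped by the worker limits, weighed through the parallel cost parameters, and can be overridden per table:

show min_parallel_table_scan_size;     -- default 8MB: smallest table considered for a parallel scan
show min_parallel_index_scan_size;     -- default 512kB
show max_parallel_workers_per_gather;  -- default 2: cap on workers for a single Gather node
show max_parallel_workers;             -- default 8: instance-wide cap
show parallel_setup_cost;              -- default 1000: startup cost charged to parallel plans
show parallel_tuple_cost;              -- default 0.1: cost of passing one row from worker to leader
alter table tbl set (parallel_workers = 24);  -- per-table override of the size-based worker estimate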
parallel recursive query, tree query, heterogeneous query, CTE, recursive CTE, connect by
Parallel recursive queries are supported.
Data volume: 100 million rows of hierarchy data, 1 billion rows of log data.
Scenario | Data volume | Parallel off | Parallel on | Parallel degree | Speedup with parallelism |
---|---|---|---|---|---|
Recursive query | 100 million hierarchy rows, 1 billion log rows | 5.14 s | 0.29 s | 24 | 17.7x |
Test case
Following 《PostgreSQL 遞歸應用實踐 - 非“傳銷”的高并發實時藤、樹狀傭金配置設定體系》: for every ID in the tree, compute an aggregate (count) over the log table.
1. Tree table design
create unlogged table tbl (
uid int8 primary key, -- user ID
pid int8 -- ID of the direct parent (upstream); NULL if the user is a ROOT user
);
create index idx_tbl_1 on tbl (pid);
2. Create a function that returns a user's parent (upstream) ID according to a fixed rule
create or replace function gen_pid(int8) returns int8 as $$
-- Generate the parent ID: IDs up to 2,000,000 are roots (NULL parent); every other ID's parent is the ID 2,000,000 smaller, forming a tree about 50 levels deep.
select case when $1<=2000000 then null else $1-2000000 end;
$$ language sql strict;
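A quick sanity check of the rule (not in the original; the expected values follow directly from the CASE expression):

select gen_pid(1500000);  -- NULL: IDs up to 2,000,000 are roots
select gen_pid(5000000);  -- 3000000: every other ID's parent is 2,000,000 below it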
3. Tree data: insert 100 million rows, forming a tree 50 levels deep.
insert into tbl select id, gen_pid(id) from generate_series(1,100000000) t(id) on conflict do nothing;
4. Behavior (log) data: 1 billion rows.
create unlogged table log (uid int, info text, crt_time timestamp);
insert into log select random()*10+1 , '', now() from generate_series(1,1000000000);
create index idx_log_1 on log(uid);
alter table tbl set (parallel_workers =24);  -- per-table override of the planner's worker estimate
alter table log set (parallel_workers =24);
vacuum analyze tbl;
vacuum analyze log;
set min_parallel_index_scan_size =0;         -- consider parallel index scans regardless of index size
set max_parallel_workers=64;                 -- instance-wide worker pool
set max_parallel_workers_per_gather =24;     -- workers allowed per Gather / Gather Merge node
set parallel_setup_cost =0;                  -- make the cost model strongly favor parallel plans
set parallel_tuple_cost =0;
set work_mem ='1GB';
set parallel_leader_participation=off;       -- the leader only coordinates; it does not execute the parallel part itself
Compute statistics over the behavior data for every value in the tree. The query materializes the tree IDs with a recursive CTE and then filters the log table with uid = any(array(...)), so the ID list is evaluated once as an InitPlan rather than once per output row.
1. Parallelism disabled, elapsed time: 5.14 seconds.
postgres=# explain with recursive tmp as (
select uid,pid from tbl where pid =1
union all
select tbl.uid,tbl.pid from tbl join tmp on (tmp.uid=tbl.pid) where tbl.* is not null
) ,
b as
(select uid, count(*) cnt from log where uid = any
(array(
select pid from tmp
))
group by uid)
select tmp.*, case when b.cnt is not null then b.cnt else 0 end as cnt from tmp left join b on (tmp.pid=b.uid);
QUERY PLAN
-------------------------------------------------------------------------------------------------------
Hash Left Join (cost=15566024.20..15566026.71 rows=101 width=24)
Hash Cond: (tmp.pid = b.uid)
CTE tmp
-> Recursive Union (cost=0.57..286.31 rows=101 width=16)
-> Index Scan using idx_tbl_1 on tbl (cost=0.57..2.79 rows=1 width=16)
Index Cond: (pid = 1)
-> Nested Loop (cost=0.57..28.15 rows=10 width=16)
-> WorkTable Scan on tmp tmp_1 (cost=0.00..0.20 rows=10 width=8)
-> Index Scan using idx_tbl_1 on tbl tbl_1 (cost=0.57..2.79 rows=1 width=16)
Index Cond: (pid = tmp_1.uid)
Filter: (tbl_1.* IS NOT NULL)
CTE b
-> GroupAggregate (cost=2.59..15565737.54 rows=11 width=12)
Group Key: log.uid
InitPlan 2 (returns $3)
-> CTE Scan on tmp tmp_2 (cost=0.00..2.02 rows=101 width=8)
-> Index Only Scan using idx_log_1 on log (cost=0.57..12493451.46 rows=614456789 width=4)
Index Cond: (uid = ANY ($3))
-> CTE Scan on tmp (cost=0.00..2.02 rows=101 width=16)
-> Hash (cost=0.22..0.22 rows=11 width=12)
-> CTE Scan on b (cost=0.00..0.22 rows=11 width=12)
(21 rows)
Time: 0.803 ms
postgres=# with recursive tmp as (
select uid,pid from tbl where pid =1
union all
select tbl.uid,tbl.pid from tbl join tmp on (tmp.uid=tbl.pid) where tbl.* is not null
) ,
b as
(select uid, count(*) cnt from log where uid = any
(array(
select pid from tmp
))
group by uid)
select tmp.*, case when b.cnt is not null then b.cnt else 0 end as cnt from tmp left join b on (tmp.pid=b.uid);
uid | pid | cnt
----------+----------+----------
2000001 | 1 | 50004739
4000001 | 2000001 | 0
6000001 | 4000001 | 0
8000001 | 6000001 | 0
10000001 | 8000001 | 0
12000001 | 10000001 | 0
14000001 | 12000001 | 0
16000001 | 14000001 | 0
18000001 | 16000001 | 0
20000001 | 18000001 | 0
22000001 | 20000001 | 0
24000001 | 22000001 | 0
26000001 | 24000001 | 0
28000001 | 26000001 | 0
30000001 | 28000001 | 0
32000001 | 30000001 | 0
34000001 | 32000001 | 0
36000001 | 34000001 | 0
38000001 | 36000001 | 0
40000001 | 38000001 | 0
42000001 | 40000001 | 0
44000001 | 42000001 | 0
46000001 | 44000001 | 0
48000001 | 46000001 | 0
50000001 | 48000001 | 0
52000001 | 50000001 | 0
54000001 | 52000001 | 0
56000001 | 54000001 | 0
58000001 | 56000001 | 0
60000001 | 58000001 | 0
62000001 | 60000001 | 0
64000001 | 62000001 | 0
66000001 | 64000001 | 0
68000001 | 66000001 | 0
70000001 | 68000001 | 0
72000001 | 70000001 | 0
74000001 | 72000001 | 0
76000001 | 74000001 | 0
78000001 | 76000001 | 0
80000001 | 78000001 | 0
82000001 | 80000001 | 0
84000001 | 82000001 | 0
86000001 | 84000001 | 0
88000001 | 86000001 | 0
90000001 | 88000001 | 0
92000001 | 90000001 | 0
94000001 | 92000001 | 0
96000001 | 94000001 | 0
98000001 | 96000001 | 0
(49 rows)
Time: 5142.932 ms (00:05.143)
2. Parallelism enabled, elapsed time: 0.29 seconds.
postgres=# explain with recursive tmp as (
select uid,pid from tbl where pid =1
union all
select tbl.uid,tbl.pid from tbl join tmp on (tmp.uid=tbl.pid) where tbl.* is not null
) ,
b as
(select uid, count(*) cnt from log where uid = any
(array(
select pid from tmp
))
group by uid)
select tmp.*, case when b.cnt is not null then b.cnt else 0 end as cnt from tmp left join b on (tmp.pid=b.uid);
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------
Hash Left Join (cost=6733216.66..6733219.17 rows=101 width=24)
Hash Cond: (tmp.pid = b.uid)
CTE tmp
-> Recursive Union (cost=0.57..286.31 rows=101 width=16)
-> Index Scan using idx_tbl_1 on tbl (cost=0.57..2.79 rows=1 width=16)
Index Cond: (pid = 1)
-> Nested Loop (cost=0.57..28.15 rows=10 width=16)
-> WorkTable Scan on tmp tmp_1 (cost=0.00..0.20 rows=10 width=8)
-> Index Scan using idx_tbl_1 on tbl tbl_1 (cost=0.57..2.79 rows=1 width=16)
Index Cond: (pid = tmp_1.uid)
Filter: (tbl_1.* IS NOT NULL)
CTE b
-> Finalize GroupAggregate (cost=3.18..6732930.00 rows=11 width=12)
Group Key: log.uid
InitPlan 2 (returns $3)
-> CTE Scan on tmp tmp_2 (cost=0.00..2.02 rows=101 width=8)
-> Gather Merge (cost=1.16..6732926.55 rows=264 width=12)
Workers Planned: 24
Params Evaluated: $3
-> Partial GroupAggregate (cost=0.57..6732919.18 rows=11 width=12)
Group Key: log.uid
-> Parallel Index Only Scan using idx_log_1 on log (cost=0.57..6604907.24 rows=25602366 width=4)
Index Cond: (uid = ANY ($3))
-> CTE Scan on tmp (cost=0.00..2.02 rows=101 width=16)
-> Hash (cost=0.22..0.22 rows=11 width=12)
-> CTE Scan on b (cost=0.00..0.22 rows=11 width=12)
(26 rows)
Time: 0.793 ms
postgres=# with recursive tmp as (
select uid,pid from tbl where pid =1
union all
select tbl.uid,tbl.pid from tbl join tmp on (tmp.uid=tbl.pid) where tbl.* is not null
) ,
b as
(select uid, count(*) cnt from log where uid = any
(array(
select pid from tmp
))
group by uid)
select tmp.*, case when b.cnt is not null then b.cnt else 0 end as cnt from tmp left join b on (tmp.pid=b.uid);
uid | pid | cnt
----------+----------+----------
2000001 | 1 | 50004739
4000001 | 2000001 | 0
6000001 | 4000001 | 0
8000001 | 6000001 | 0
10000001 | 8000001 | 0
12000001 | 10000001 | 0
14000001 | 12000001 | 0
16000001 | 14000001 | 0
18000001 | 16000001 | 0
20000001 | 18000001 | 0
22000001 | 20000001 | 0
24000001 | 22000001 | 0
26000001 | 24000001 | 0
28000001 | 26000001 | 0
30000001 | 28000001 | 0
32000001 | 30000001 | 0
34000001 | 32000001 | 0
36000001 | 34000001 | 0
38000001 | 36000001 | 0
40000001 | 38000001 | 0
42000001 | 40000001 | 0
44000001 | 42000001 | 0
46000001 | 44000001 | 0
48000001 | 46000001 | 0
50000001 | 48000001 | 0
52000001 | 50000001 | 0
54000001 | 52000001 | 0
56000001 | 54000001 | 0
58000001 | 56000001 | 0
60000001 | 58000001 | 0
62000001 | 60000001 | 0
64000001 | 62000001 | 0
66000001 | 64000001 | 0
68000001 | 66000001 | 0
70000001 | 68000001 | 0
72000001 | 70000001 | 0
74000001 | 72000001 | 0
76000001 | 74000001 | 0
78000001 | 76000001 | 0
80000001 | 78000001 | 0
82000001 | 80000001 | 0
84000001 | 82000001 | 0
86000001 | 84000001 | 0
88000001 | 86000001 | 0
90000001 | 88000001 | 0
92000001 | 90000001 | 0
94000001 | 92000001 | 0
96000001 | 94000001 | 0
98000001 | 96000001 | 0
(49 rows)
Time: 289.225 ms
Note: do not use the following query forms; their performance is poor.
Not parallelized, about 4 seconds (the count here is a correlated subquery, evaluated once per row of tmp):
with recursive tmp as (
select uid,pid from tbl where pid =1
union all
select tbl.uid,tbl.pid from tbl join tmp on (tmp.uid=tbl.pid) where tbl.* is not null
)
select *,(select count(*) from log where uid=tmp.pid) from tmp;
Does not use multi-stage (partial + finalize) parallel aggregation, so it is slow:
with recursive tmp as (
select uid,pid from tbl where pid =1
union all
select tbl.uid,tbl.pid from tbl join tmp on (tmp.uid=tbl.pid) where tbl.* is not null
)
select tmp.pid,count(*) from tmp left join log on (tmp.pid=log.uid) group by tmp.pid;
Other knowledge
2. Identifying whether a function or operator supports parallel execution (pg_proc.proparallel: s = safe, r = restricted, u = unsafe); a sketch for declaring a UDF's parallel safety follows the listing below.
postgres=# select proparallel,proname from pg_proc;
proparallel | proname
-------------+----------------------------------------------
s | boolin
s | boolout
s | byteain
s | byteaout
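A minimal sketch of declaring parallel safety for your own function (the function name my_udf is hypothetical, not from the original):

-- Mark the UDF as safe to execute inside parallel workers;
-- the alternatives are PARALLEL RESTRICTED and PARALLEL UNSAFE (the default).
create or replace function my_udf(int8) returns int8 as $$
  select $1 + 1;
$$ language sql strict parallel safe;

-- Check how it is recorded in the catalog (s = safe, r = restricted, u = unsafe).
select proparallel, proname from pg_proc where proname = 'my_udf';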
3. Subquery "MapReduce" with unlogged tables
In some cases, to relieve the optimizer of the burden of parallel-planning an extremely complex SQL statement, you can split the SQL into several stages yourself and store the intermediate results in unlogged tables, in the spirit of MapReduce. Unlogged tables also support parallel computation; a minimal sketch follows.
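A sketch of that idea, reusing the log and tbl tables from the test case above (the intermediate table name stage1 and this particular split are illustrative assumptions, not from the original):

-- Stage 1: materialize an intermediate result as an unlogged table.
-- The aggregate here can itself run in parallel.
create unlogged table stage1 as
  select uid, count(*) as cnt
  from log
  group by uid;

alter table stage1 set (parallel_workers = 8);
vacuum analyze stage1;

-- Stage 2: run the remaining, now much simpler, query on the intermediate table,
-- which the optimizer can again parallelize on its own.
select t.uid, t.pid, coalesce(s.cnt, 0) as cnt
from tbl t
left join stage1 s on (s.uid = t.pid)
where t.pid = 1;  -- e.g. just the direct children of user 1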
4. Parallel vacuum (garbage collection).
5. Parallelism via asynchronous dblink calls; see the references below, and the sketch after them.
《PostgreSQL VOPS 向量計算 + DBLINK異步并行 - 單執行個體 10億 聚合計算跑進2秒》
《PostgreSQL 相似搜尋分布式架構設計與實踐 - dblink異步調用與多機并行(遠端 遊标+記錄 UDF執行個體)》
《PostgreSQL dblink異步調用實作 并行hash分片JOIN - 含資料交、并、差 提速案例 - 含dblink VS pg 11 parallel hash join VS pg 11 智能分區JOIN》
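A minimal sketch of the asynchronous dblink pattern (the connection name c1 and the loopback connection string are illustrative assumptions): fire a query without blocking, optionally do other work or start more queries on other connections, then collect the results.

create extension if not exists dblink;

-- Open a named connection (a loopback connection, for illustration).
select dblink_connect('c1', 'dbname=postgres');

-- Send the query asynchronously; it starts running on the remote side immediately.
select dblink_send_query('c1', 'select uid, count(*) from log group by uid');

-- ... the local session is free to send more queries on other connections here ...

-- Collect the result set (dblink_get_result blocks until it is ready).
select * from dblink_get_result('c1') as t(uid int, cnt int8);

select dblink_disconnect('c1');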
Scenarios where parallelism is not yet allowed (PostgreSQL will keep widening support in future releases):
1. Statements that modify or lock rows; the exceptions are CREATE TABLE AS, SELECT INTO and CREATE MATERIALIZED VIEW, which can use parallelism.
2. Queries that may be interrupted or fetched piecemeal, e.g. cursors or loops in PL/pgSQL; because intermediate processing is involved, enabling parallelism here is not recommended.
3. Parallel-unsafe UDFs; such UDFs are never executed in parallel.
4. Nested parallelism (a UDF whose internal query runs in parallel): the outer SQL that calls such a UDF is not parallelized (mainly to prevent spawning an excessive number of parallel workers).
5. The SSI (serializable) isolation level.
References
https://www.postgresql.org/docs/11/parallel-plans.html
《PostgreSQL 11 preview - 并行計算 增強 彙總》
《PostgreSQL 10 自定義并行計算聚合函數的原理與實踐 - (含array_agg合并多個數組為單個一進制數組的例子)》
