
Optimizing PostgreSQL native partition table performance - dynamic SQL + server-side bind variables

Tags

PostgreSQL, partitioned table, native partition, performance, pg_pathman, plpgsql, dynamic SQL, server-side bind variables, prepare, execute

Background

At present, PostgreSQL's native partitioning does not perform well for inserts through the parent table. One workaround is the pg_pathman extension; another is for the application to insert directly into the target partition; a third is to go through a UDF interface (inside the function, prepared statements are used to reduce parse CPU overhead).

This article gives an example of the UDF approach, along with a performance comparison.

Example

1. Create the partitioned table

create table p (id int , info text, crt_time timestamp) partition by list (abs(mod(id,128)));    
           

2. Create 128 partitions

do language plpgsql $$  
declare  
begin  
  for i in 0..127 loop  
    execute format('create table p%s partition of p for values in (%s)', i, i);  
  end loop;  
end;  
$$;  
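
As a quick sanity check (the query below is my addition, not part of the original post), pg_inherits can be used to confirm that all 128 partitions are attached to the parent:

select count(*) from pg_inherits where inhparent = 'p'::regclass;  -- expect 128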
           

Inserting directly into the partitioned parent table

vi test.sql  
\set id random(1,2000000000)  
insert into p values (:id, 'test', now());  
           

Performance

pgbench -M prepared -n -r -P 1 -f ./test.sql -c 64 -j 64 -T 120  
  
  
  
pgbench -M prepared -n -r -P 1 -f ./test.sql -c 32 -j 32 -T 120  
progress: 1.0 s, 26287.2 tps, lat 1.178 ms stddev 0.418  
progress: 2.0 s, 27441.8 tps, lat 1.166 ms stddev 0.393  
progress: 3.0 s, 27526.0 tps, lat 1.163 ms stddev 0.398  
           

Batch insert performance (into the parent table)

vi test.sql  
  
insert into p values (1,'test',now()),(2,'test',now()),(3,'test',now()),(4,'test',now()),(5,'test',now()),(6,'test',now()),(7,'test',now()),(8,'test',now()),(9,'test',now()),(10,'test',now());  
           
pgbench -M prepared -n -r -P 1 -f ./test.sql -c 32 -j 32 -T 120  
progress: 1.0 s, 26240.5 tps, lat 1.179 ms stddev 0.462  
progress: 2.0 s, 28285.8 tps, lat 1.131 ms stddev 0.393  
progress: 3.0 s, 28185.1 tps, lat 1.135 ms stddev 0.423  
progress: 4.0 s, 28266.1 tps, lat 1.132 ms stddev 0.395  
progress: 5.0 s, 28248.9 tps, lat 1.133 ms stddev 0.438  
progress: 6.0 s, 26739.0 tps, lat 1.197 ms stddev 1.154  
progress: 7.0 s, 28075.1 tps, lat 1.140 ms stddev 0.426  
progress: 8.0 s, 28297.8 tps, lat 1.131 ms stddev 0.384  
           

Inserting into partitions using a UDF + bind variables

1. Bind variable (PREPARE/EXECUTE) syntax

postgres=# \h prepare  
Command:     PREPARE  
Description: prepare a statement for execution  
Syntax:  
PREPARE name [ ( data_type [, ...] ) ] AS statement  
  
postgres=# \h execute  
Command:     EXECUTE  
Description: execute a prepared statement  
Syntax:  
EXECUTE name [ ( parameter [, ...] ) ]  
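
For example, one server-side prepared insert per partition can be used like this (the statement name ps0 and the values are mine, chosen only for illustration; the UDF below generates one such statement per partition):

prepare ps0 (int, text, timestamp) as insert into p0 (id, info, crt_time) values ($1, $2, $3);
execute ps0 (128, 'test', now()::timestamp);  -- abs(mod(128,128)) = 0, so this row belongs in p0
deallocate ps0;  -- optional; prepared statements are session-local and disappear at session end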
           

2. Write a UDF that uses bind variables for the insert

create or replace function ins_p(int, text, timestamp) returns void as $$  
declare  
  -- route the row: the partition suffix is abs(mod(id,128)), matching the list partition key  
  suffix text := abs(mod($1,128));  
begin  
  -- try the session-local prepared statement for this partition  
  execute format('execute ps%s(%s, %L, %L)', suffix, $1, $2, $3);  
  exception when others then  
    -- first use in this session: prepare the per-partition statement, then retry  
    execute format('prepare ps%s(int,text,timestamp) as insert into p%s (id,info,crt_time) values ($1,$2,$3)', suffix, suffix);  
    execute format('execute ps%s(%s, %L, %L)', suffix, $1, $2, $3);  
end;  
$$ language plpgsql strict;  
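
On the first call for a given partition in a session, the inner EXECUTE fails because ps<suffix> does not exist yet; the exception handler prepares it and retries, and every later call for that partition reuses the already-planned statement. A minimal smoke test (the ids here are mine, chosen only for illustration):

select ins_p(1, 'test', now()::timestamp);     -- first call in the session: prepares ps1, then inserts into p1
select ins_p(129, 'test', now()::timestamp);   -- abs(mod(129,128)) = 1, so this reuses the prepared ps1
select name, statement from pg_prepared_statements;  -- ps1 should now be listed for this session

Note that prepared statements live per backend session, so with a connection pool each backend pays the prepare cost only once per partition.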
           

3. Performance

vi test.sql  
  
\set id random(1,2000000000)  
select ins_p(:id, 'test', now()::timestamp);  
           
pgbench -M prepared -n -r -P 1 -f ./test.sql -c 32 -j 32 -T 120  
progress: 1.0 s, 192814.1 tps, lat 0.161 ms stddev 0.092  
progress: 2.0 s, 205480.6 tps, lat 0.156 ms stddev 0.061  
progress: 3.0 s, 209206.4 tps, lat 0.153 ms stddev 0.058  
progress: 4.0 s, 206333.8 tps, lat 0.155 ms stddev 0.061  
           

For batch writes, keep ins_p unchanged and add a wrapper UDF, as follows

create or replace function ins_p(int, text, timestamp) returns void as $$  
declare  
  suffix text := abs(mod($1,128));  
begin  
  execute format('execute ps%s(%s, %L, %L)', suffix, $1, $2, $3);  
  exception when others then  
    execute format('prepare ps%s(int,text,timestamp) as insert into p%s (id,info,crt_time) values ($1,$2,$3)', suffix, suffix);  
    execute format('execute ps%s(%s, %L, %L)', suffix, $1, $2, $3);  
end;  
$$ language plpgsql strict;  
           
create or replace function ins_p_batch(p[]) returns void as $$  
declare  
  i p;  -- one element of the input array, using table p's composite rowtype  
begin  
  -- route each row through ins_p, which reuses the per-partition prepared statements  
  foreach i in array $1 loop  
    perform ins_p(i.id, i.info, i.crt_time);  
  end loop;  
end;  
$$ language plpgsql strict;  
           

Batch usage example

postgres=# select count(*) from p;  
  count     
----------  
 28741670  
(1 row)  
  
Time: 390.775 ms  
postgres=# select ins_p_batch((select array_agg(p) from (select p from p limit 10000) t));  
 ins_p_batch   
-------------  
   
(1 row)  
  
Time: 247.861 ms  
postgres=# select count(*) from p;  
  count     
----------  
 28751670  
(1 row)  
  
Time: 383.485 ms  
           
postgres=# select array_agg(p) from (select p from p limit 10) t;  
-[ RECORD 1 ]-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------  
array_agg | {"(1269675648,test,\"2019-01-09 17:08:35.432933\")","(1515917568,test,\"2019-01-09 17:08:35.435001\")","(137413760,test,\"2019-01-09 17:08:35.438484\")","(1750920192,test,\"2019-01-09 17:08:35.443544\")","(849316096,test,\"2019-01-09 17:08:35.448552\")","(891638016,test,\"2019-01-09 17:08:35.449074\")","(320902144,test,\"2019-01-09 17:08:35.449142\")","(95829120,test,\"2019-01-09 17:08:35.453658\")","(358048256,test,\"2019-01-09 17:08:35.454924\")","(1009512320,test,\"2019-01-09 17:08:35.457164\")"}  
  
Time: 1.771 ms  
  
postgres=# select ins_p_batch('{"(1269675648,test,\"2019-01-09 17:08:35.432933\")","(1515917568,test,\"2019-01-09 17:08:35.435001\")","(137413760,test,\"2019-01-09 17:08:35.438484\")","(1750920192,test,\"2019-01-09 17:08:35.443544\")","(849316096,test,\"2019-01-09 17:08:35.448552\")","(891638016,test,\"2019-01-09 17:08:35.449074\")","(320902144,test,\"2019-01-09 17:08:35.449142\")","(95829120,test,\"2019-01-09 17:08:35.453658\")","(358048256,test,\"2019-01-09 17:08:35.454924\")","(1009512320,test,\"2019-01-09 17:08:35.457164\")"}');  
 ins_p_batch   
-------------  
   
(1 row)  
  
Time: 0.841 ms  
           
vi test.sql  
select ins_p_batch('{"(1269675648,test,\"2019-01-09\")","(1515917568,test,\"2019-01-09\")","(137413760,test,\"2019-01-09\")","(1750920192,test,\"2019-01-09\")","(849316096,test,\"2019-01-09\")","(891638016,test,\"2019-01-09\")","(320902144,test,\"2019-01-09\")","(95829120,test,\"2019-01-09\")","(358048256,test,\"2019-01-09\")","(1009512320,test,\"2019-01-09\")"}');  
           

Inserting 10 rows per call

pgbench -M prepared -n -r -P 1 -f ./test.sql -c 32 -j 32 -T 120  
progress: 1.0 s, 41637.4 tps, lat 0.745 ms stddev 0.742  
progress: 2.0 s, 42862.5 tps, lat 0.746 ms stddev 0.614  
progress: 3.0 s, 42417.1 tps, lat 0.754 ms stddev 0.689  
progress: 4.0 s, 42389.5 tps, lat 0.755 ms stddev 0.691  
           

Application writes directly to the partition

vi test.sql  
\set id random(1,2000000000)  
insert into p2 values (2,'test',now());  
           
pgbench -M prepared -n -r -P 1 -f ./test.sql -c 32 -j 32 -T 120  
progress: 1.0 s, 364350.5 tps, lat 0.085 ms stddev 0.208  
progress: 2.0 s, 379071.4 tps, lat 0.084 ms stddev 0.215  
progress: 3.0 s, 384452.1 tps, lat 0.083 ms stddev 0.188  
           

Performance comparison

Method | Rows inserted per second
Insert into the partitioned parent table (single row) | ~27,000
Insert into the partitioned parent table (10 rows per statement) | ~280,000
Application inserts directly into the partition (single row) | ~380,000
UDF + dynamic server-side bind variables (single row) | ~200,000
UDF + dynamic server-side bind variables, batch (10 rows per call) | ~420,000

Also note that the higher the concurrency, the worse direct inserts into the parent table perform; for example, with 64 concurrent connections the rate drops to only about 21,000 rows/s.


