In an earlier chapter we discussed cutting up a very large table with a combination of partitioning and parallelism. That approach goes a long way toward spreading the work evenly, but it is not perfect.
For example, push the data volume up a few more notches: suppose the table is now about 1T in size with 10 partitions, the largest of which is 400G. If we want the exported data to be as evenly distributed as possible, parallelism alone may no longer be enough.
Say we want each dump file to stay around 200M; the 400G partition would then need a degree of parallelism of 800 to finish. In real-world database maintenance the default number of parallel processes is only 64, and even raising it several times over will never reach 800.
So when the data volume is extreme and resources are tight, the dump files that come out can end up quite large.
Instead, we can use rowid ranges to meet the requirement.
We can decide up front how many dump files to generate. For example, if table subscriber is 600M, then at 200M per chunk we need 3 dump files.
If we want the data split to be truly even, we have to do a little work at the rowid level.
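The building block is dbms_rowid.rowid_create, which assembles a rowid out of a data object id, a relative file number, a block number and a row number. Here is a minimal sketch just to show the call itself; the owner and table names are borrowed from the examples in this section and are only illustrative.
-- minimal sketch: build the lowest rowid of one extent of a segment
SELECT dbms_rowid.rowid_create(
           1,                  -- 1 = extended rowid format
           o.data_object_id,   -- segment the rowid belongs to
           e.relative_fno,     -- datafile holding the extent
           e.block_id,         -- first block of the extent
           0                   -- row 0, the lowest slot in that block
       ) AS start_rowid
  FROM dba_objects o, dba_extents e
 WHERE o.owner = e.owner
   AND o.object_name = e.segment_name
   AND o.owner = 'PRDOWNER'
   AND o.object_name = 'SUBSCRIBER'
   AND ROWNUM = 1;
The full script shown below does exactly this across all of the segment's extents: it buckets them into the requested number of groups and takes the lowest and highest block of each group as the boundaries of a rowid range.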
First we set up a parameter file in the following format.
You can see that table MEMO is huge: at roughly 200M per chunk, its largest partition (P9_A3000_E5) needs a degree of 800.
Table ICE_AGREEMENT is fairly small and is not partitioned, so we put an x in the partition column as a placeholder, which makes these tables easy to tell apart during processing.
MEMO P9_A3000_E0 156
MEMO P9_A3000_E1 170
MEMO P9_A3000_E2 190
MEMO P9_A3000_E3 200
MEMO P9_A3000_E4 180
MEMO P9_A3000_E5 800
MEMO PMAXVALUE_AMAXVALUE_EMAXVALUE 1
ICE_AGREEMENT x 36
CRIBER_HISTORY x 11
The rowid split itself can be done with the following script.
#### $1 dba conn details
#### $2 table owner
#### $3 table_name
#### $4 subobject_name
#### $5 parallel_no
function normal_split
{
sqlplus -s $1 <<EOF
set linesize 200
set pages 0
set feedback off
spool list/rowid_range_$3_x.lst
select rownum || ', ' ||' where rowid between '||
chr(39)||dbms_rowid.rowid_create( 1, DOI, lo_fno, lo_block, 0 ) ||chr(39)|| ' and ' ||
chr(39)||dbms_rowid.rowid_create( 1, DOI, hi_fno, hi_block, 1000000 )||chr(39) data
from (
SELECT DISTINCT DOI, grp,
first_value(relative_fno) over (partition BY DOI,grp order by relative_fno, block_id rows BETWEEN unbounded preceding AND unbounded following) lo_fno,
first_value(block_id ) over (partition BY DOI,grp order by relative_fno, block_id rows BETWEEN unbounded preceding AND unbounded following) lo_block,
last_value(relative_fno) over (partition BY DOI,grp order by relative_fno, block_id rows BETWEEN unbounded preceding AND unbounded following) hi_fno,
last_value(block_id+blocks-1) over (partition BY DOI,grp order by relative_fno, block_id rows BETWEEN unbounded preceding AND unbounded following) hi_block,
SUM(blocks) over (partition BY DOI,grp) sum_blocks,SUBOBJECT_NAME
FROM(
SELECT obj.OBJECT_ID,
obj.SUBOBJECT_NAME,
obj.DATA_OBJECT_ID as DOI,
ext.relative_fno,
ext.block_id,
( SUM(blocks) over () ) SUM,
(SUM(blocks) over (ORDER BY DATA_OBJECT_ID,relative_fno, block_id)-0.01 ) sum_fno ,
TRUNC( (SUM(blocks) over (ORDER BY DATA_OBJECT_ID,relative_fno, block_id)-0.01) / (SUM(blocks) over ()/ $5 ) ) grp,
ext.blocks
FROM dba_extents ext, dba_objects obj
WHERE ext.segment_name = UPPER('$3')
AND ext.owner = UPPER('$2')
AND obj.owner = ext.owner
AND obj.object_name = ext.segment_name
AND obj.DATA_OBJECT_ID IS NOT NULL
ORDER BY DATA_OBJECT_ID, relative_fno, block_id
) order by DOI,grp
);
spool off;
EOF
}
function partition_split
{
#### identical to normal_split except that the spool target becomes
####     spool list/rowid_range_$3_$4.lst
#### and the inner query carries one extra predicate:
####     AND obj.subobject_name = UPPER('$4')
#### (the full SQL body is omitted here; copy it from normal_split)
:
}
sub_partition_name=$4
if [[ $sub_partition_name = 'x' ]]
then
normal_split $1 $2 $3 x $5
else
partition_split $1 $2 $3 $4 $5
fi
The script is fairly long and takes five parameters. Because it queries dba_extents and dba_objects, a certain level of privilege is needed, so a DBA account is the easiest choice for the first parameter (the connection string).
The 2nd parameter is the table owner, the 3rd is the table name, the 4th is the partition name (the actual partition name for a partitioned table, or x for a non-partitioned one), and the 5th is the desired degree of parallelism, i.e. how many rowid ranges to generate, which speeds up the export accordingly.
Here is a quick demo. The script can be run as follows; this table is not partitioned and we ask for 10 ranges (one per dump file).
ksh gen_rowid.sh n1/n1 prdowner subscriber_history x 10
1, where rowid between 'AAB4VPAAJAAD7qAAAA' and 'AAB4VPAAJAAD/R/EJA'
2, where rowid between 'AAB4VPAAJAAD/SAAAA' and 'AAB4VPAAKAABV5/EJA'
3, where rowid between 'AAB4VPAAKAABV6AAAA' and 'AAB4VPAALAAE/p/EJA'
4, where rowid between 'AAB4VPAALAAE/qAAAA' and 'AAB4VPAAMAAFFh/EJA'
5, where rowid between 'AAB4VPAAMAAFFiAAAA' and 'AAB4VPAAyAACuh/EJA'
6, where rowid between 'AAB4VPAAyAACuiAAAA' and 'AAB4VPAAzAACe5/EJA'
7, where rowid between 'AAB4VPAAzAACe6AAAA' and 'AAB4VPAA1AACZR/EJA'
8, where rowid between 'AAB4VPAA1AACZSAAAA' and 'AAB4VPAA2AACWR/EJA'
9, where rowid between 'AAB4VPAA2AACWSAAAA' and 'AAB4VPAA4AACP5/EJA'
10, where rowid between 'AAB4VPAA4AACQCAAAA' and 'AAB4VPAA5AACHx/EJA'
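Each of these lines can be appended directly after a table name in a query, or used as the QUERY clause of an export. As a rough sketch, one slice could drive a classic exp session through a parameter file like this (the parfile and dump file names are only illustrative):
cat > sub_hist_01.par <<EOF
userid=n1/n1
tables=prdowner.subscriber_history
query="where rowid between 'AAB4VPAAJAAD7qAAAA' and 'AAB4VPAAJAAD/R/EJA'"
file=sub_hist_01.dmp
log=sub_hist_01.log
EOF
exp parfile=sub_hist_01.par
Running the ten slices as ten such sessions side by side is what keeps every dump file near the 200M target.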
Now let's check whether the data really is spread evenly.
We can verify it along the following lines, sampling ranges 1, 2 and 10.
SQL> select count(*)from subscriber_history where rowid between 'AAB4VPAAJAAD7qAAAA' and 'AAB4VPAAJAAD/R/EJA'
2 ;
COUNT(*)
----------
328759
SQL> select count(*)from subscriber_history where rowid between 'AAB4VPAAJAAD/SAAAA' and 'AAB4VPAAKAABV5/EJA'
2 /
318021
SQL> select count(*)from subscriber_history where rowid between 'AAB4VPAA4AACQCAAAA' and 'AAB4VPAA5AACHx/EJA';
332638
The counts show that the data is distributed very evenly, which is exactly what we were aiming for.
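Finally, once the ranges check out, the whole parameter file can be processed in one pass with a small driver loop. A sketch follows; the file name split_par.lst is an assumption, and its three columns are the table, partition and degree shown earlier.
#### hedged sketch: generate rowid ranges for every table/partition listed
#### in the parameter file (split_par.lst is an assumed name)
while read tab par deg
do
    ksh gen_rowid.sh n1/n1 prdowner $tab $par $deg
done < split_par.lst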