Just joined, and straight into optimizing a MySQL table with over 100 million rows

Background

Instance XX (one primary, one replica) fires an SLA alarm in the xxx alerting system every day in the early hours. The alarm means there is some primary-replica replication lag (if a primary/replica switchover happened at that moment, it would take a long time to complete, because the replica must first catch up to keep its data consistent with the primary).

Instance XX also has the largest number of slow queries (SQL statements that run longer than 1s are logged), and every night the XX application runs a job that deletes data older than one month.

Analysis

Analyze the most recent week of mysql-slow.log with pt-query-digest.
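A minimal sketch of that command, assuming the slow log sits in the instance's data directory (the log path and report location are illustrative):

pt-query-digest --since=7d /datas/mysql/data/3316/mysql-slow.log > /tmp/arrival_record_slow_report.log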


select queries on arrival_record are by far the most frequent slow queries, more than 40,000 of them with an average response time of 4s; delete on arrival_record was logged 6 times with an average response time of 258s.

The select xxx_record statements

The slow select arrival_record statements all look like the one below; the columns in the where clause are the same, only the parameter values differ:

select count(*) from arrival_record where product_id=26 and receive_time between '2019-03-25 14:00:00' and '2019-03-25 15:00:00' and receive_spend_ms>=0\G


Check the execution plan:

explain select count(*) from arrival_record where product_id=26 and receive_time between '2019-03-25 14:00:00' and '2019-03-25 15:00:00' and receive_spend_ms>=0\G;
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: arrival_record
partitions: NULL
type: ref
possible_keys: IXFK_arrival_record
key: IXFK_arrival_record
key_len: 8
ref: const
rows: 32261320
filtered: 3.70
Extra: Using index condition; Using where
1 row in set, 1 warning (0.00 sec)           

The query does use the IXFK_arrival_record index, but the estimated number of rows to scan is huge: more than 30 million.

show index from arrival_record;
+----------------+------------+---------------------+--------------+--------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+----------------+------------+---------------------+--------------+--------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| arrival_record | 0 | PRIMARY | 1 | id | A | 107990720 | NULL | NULL | | BTREE | | |
| arrival_record | 1 | IXFK_arrival_record | 1 | product_id | A | 1344 | NULL | NULL | | BTREE | | |
| arrival_record | 1 | IXFK_arrival_record | 2 | station_no | A | 22161 | NULL | NULL | YES | BTREE | | |
| arrival_record | 1 | IXFK_arrival_record | 3 | sequence | A | 77233384 | NULL | NULL | | BTREE | | |
| arrival_record | 1 | IXFK_arrival_record | 4 | receive_time | A | 65854652 | NULL | NULL | YES | BTREE | | |
| arrival_record | 1 | IXFK_arrival_record | 5 | arrival_time | A | 73861904 | NULL | NULL | YES | BTREE | | |
+----------------+------------+---------------------+--------------+--------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
show create table arrival_record;
..........
arrival_spend_ms bigint(20) DEFAULT NULL,
total_spend_ms bigint(20) DEFAULT NULL,
PRIMARY KEY (id),
KEY IXFK_arrival_record (product_id,station_no,sequence,receive_time,arrival_time) USING BTREE,
CONSTRAINT FK_arrival_record_product FOREIGN KEY (product_id) REFERENCES product (id) ON DELETE NO ACTION ON UPDATE NO ACTION           

The table holds more than 100 million rows in total and has only one composite index. The product_id column has very low cardinality, so its selectivity is poor.

The filter condition being passed in, where product_id=26 and receive_time between '2019-03-25 14:00:00' and '2019-03-25 15:00:00' and receive_spend_ms>=0, contains no station_no column, so it cannot use the product_id, station_no, sequence and receive_time columns of the composite index IXFK_arrival_record together.

By the leftmost-prefix rule, select arrival_record therefore only uses the first column of the composite index IXFK_arrival_record, product_id, and since that column's selectivity is very poor, a huge number of rows are scanned and the execution time is long.

The receive_time column has high cardinality and good selectivity, so a standalone index can be built on it, and the select arrival_record SQL will then use that index (a quick way to compare the two columns' selectivity is sketched below).
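A quick, hedged way to check this before adding the index (COUNT(DISTINCT ...) over a 100-million-row table is expensive, so run it off-peak or on the replica; the connection options are assumed):

mysql -uroot -p$passwd --socket=/datas/mysql/data/3316/mysqld.sock -e "
SELECT COUNT(DISTINCT product_id)   / COUNT(*) AS product_id_selectivity,
       COUNT(DISTINCT receive_time) / COUNT(*) AS receive_time_selectivity
FROM cq_new_cimiss.arrival_record;"

The closer a ratio is to 1, the more selective the column: product_id yields a tiny ratio, while receive_time's is orders of magnitude higher, which is what makes the standalone receive_time index worthwhile.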

So far we know that the select arrival_record where clauses recorded in the slow log filter on product_id, receive_time and receive_spend_ms. Are there also accesses to this table that filter on other columns?

Time for the trusty tcpdump to make its entrance.

Capture the select statements hitting this table with tcpdump for a while:

tcpdump -i bond0 -s 0 -l -w - dst port 3316 | strings | grep select | egrep -i 'arrival_record' >/tmp/select_arri.log           

Extract the where clause that follows from in the captured select statements:

IFS_OLD=$IFS
IFS=$'\n'
for i in `cat /tmp/select_arri.log `;do echo ${i#*'from'}; done | less
IFS=$IFS_OLD           
arrival_record arrivalrec0_ where arrivalrec0_.sequence='2019-03-27 08:40' and arrivalrec0_.product_id=17 and arrivalrec0_.station_no='56742'
arrival_record arrivalrec0_ where arrivalrec0_.sequence='2019-03-27 08:40' and arrivalrec0_.product_id=22 and arrivalrec0_.station_no='S7100'
arrival_record arrivalrec0_ where arrivalrec0_.sequence='2019-03-27 08:40' and arrivalrec0_.product_id=24 and arrivalrec0_.station_no='V4631'
arrival_record arrivalrec0_ where arrivalrec0_.sequence='2019-03-27 08:40' and arrivalrec0_.product_id=22 and arrivalrec0_.station_no='S9466'
arrival_record arrivalrec0_ where arrivalrec0_.sequence='2019-03-27 08:40' and arrivalrec0_.product_id=24 and arrivalrec0_.station_no='V4205'
arrival_record arrivalrec0_ where arrivalrec0_.sequence='2019-03-27 08:40' and arrivalrec0_.product_id=24 and arrivalrec0_.station_no='V4105'
arrival_record arrivalrec0_ where arrivalrec0_.sequence='2019-03-27 08:40' and arrivalrec0_.product_id=24 and arrivalrec0_.station_no='V4506'
arrival_record arrivalrec0_ where arrivalrec0_.sequence='2019-03-27 08:40' and arrivalrec0_.product_id=24 and arrivalrec0_.station_no='V4617'
arrival_record arrivalrec0_ where arrivalrec0_.sequence='2019-03-27 08:40' and arrivalrec0_.product_id=22 and arrivalrec0_.station_no='S8356'
arrival_record arrivalrec0_ where arrivalrec0_.sequence='2019-03-27 08:40' and arrivalrec0_.product_id=22 and arrivalrec0_.station_no='S8356'           

The where conditions of the selects against this table contain the product_id, station_no and sequence columns, so they can use the first three columns of the composite index IXFK_arrival_record.

Putting this together, the optimization is: drop the composite index IXFK_arrival_record, create a new composite index idx_sequence_station_no_product_id, and create a standalone index idx_receive_time.
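A minimal sketch of the corresponding DDL, with the index names used in the summary above (connection options assumed; as explained in the testing section below, the foreign key on product_id has to be dropped first because it depends on the composite index, and the exact statement run in production appears in the implementation script later):

mysql -uroot -p$passwd --socket=/datas/mysql/data/3316/mysqld.sock -e "
ALTER TABLE cq_new_cimiss.arrival_record
  DROP FOREIGN KEY FK_arrival_record_product,
  DROP INDEX IXFK_arrival_record,
  ADD INDEX idx_sequence_station_no_product_id (sequence, station_no, product_id),
  ADD INDEX idx_receive_time (receive_time);"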

The delete xxx_record statement

The slow delete arrival_record statement is the nightly job that purges data older than a month, filtering on receive_time; its execution plan is examined again after the index change below.

Testing

Copy the arrival_record table to a test instance to rehearse dropping and rebuilding the indexes.

arrival_record table information on instance XX:

du -sh /datas/mysql/data/3316/cq_new_cimiss/arrival_record*

12K /datas/mysql/data/3316/cq_new_cimiss/arrival_record.frm

48G /datas/mysql/data/3316/cq_new_cimiss/arrival_record.ibd

select count(*) from cq_new_cimiss.arrival_record;
+-----------+
| count(*)  |
+-----------+
| 112294946 |
+-----------+
More than 100 million rows.

SELECT
table_name,
CONCAT(FORMAT(SUM(data_length) / 1024 / 1024,2),'M') AS dbdata_size,
CONCAT(FORMAT(SUM(index_length) / 1024 / 1024,2),'M') AS dbindex_size,
CONCAT(FORMAT(SUM(data_length + index_length) / 1024 / 1024 / 1024,2),'G') AS `table_size(G)`,
AVG_ROW_LENGTH,table_rows,update_time
FROM
information_schema.tables
WHERE table_schema = 'cq_new_cimiss' and table_name='arrival_record';
+----------------+-------------+--------------+------------+----------------+------------+---------------------+
| table_name | dbdata_size | dbindex_size | table_size(G) | AVG_ROW_LENGTH | table_rows | update_time |
+----------------+-------------+--------------+------------+----------------+------------+---------------------+
| arrival_record | 18,268.02M | 13,868.05M | 31.38G | 175 | 109155053 | 2019-03-26 12:40:17 |
+----------------+-------------+--------------+------------+----------------+------------+---------------------+           

The table occupies 48G on disk while MySQL reports its size as about 31G, so there is roughly 17G of fragmentation, mostly left behind by deletes (the rows are gone, but the space has not been reclaimed).
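One hedged way to double-check the reclaimable space is data_free in information_schema.tables, which reports free space inside the tablespace (only an estimate; the authoritative comparison is the one above between the .ibd file size and data_length + index_length):

mysql -uroot -p$passwd --socket=/datas/mysql/data/3316/mysqld.sock -e "
SELECT table_name, ROUND(data_free/1024/1024/1024, 2) AS data_free_gb
FROM information_schema.tables
WHERE table_schema = 'cq_new_cimiss' AND table_name = 'arrival_record';"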

Back up the table, restore it into a new instance, drop the original composite index there, and add the new indexes as a test.

Parallel compressed backup with mydumper:

user=root
passwd=xxxx
socket=/datas/mysql/data/3316/mysqld.sock
db=cq_new_cimiss
table_name=arrival_record
backupdir=/datas/dump_$table_name
mkdir -p $backupdir
# -c compresses the dump files, -T dumps only this table, -t 32 uses 32 threads, -r 2000000 splits the table into chunks of 2,000,000 rows
nohup echo `date +%T` && mydumper -u $user -p $passwd -S $socket -B $db -c -T $table_name -o $backupdir -t 32 -r 2000000 && echo `date +%T` &

The parallel compressed backup took 52s and occupies 1.2G (the table itself takes 48G on disk, so mydumper's parallel compression ratio is impressively high!).

Started dump at: 2019-03-26 12:46:04
........

Finished dump at: 2019-03-26 12:46:56

du -sh   /datas/dump_arrival_record/
1.2G  /datas/dump_arrival_record/           

Copy the dump to the test node:

scp -rp /datas/dump_arrival_record [email protected]:/datas           

Import the data with multiple threads:

time myloader -u root -S /datas/mysql/data/3308/mysqld.sock -P 3308 -p root -B test -d /datas/dump_arrival_record           

real 126m42.885s

user 1m4.543s

sys 0m4.267s

Disk usage of the table after the logical import:

du -h -d 1 /datas/mysql/data/3308/test/arrival_record.*
12K /datas/mysql/data/3308/test/arrival_record.frm
30G /datas/mysql/data/3308/test/arrival_record.ibd
No fragmentation; the file size matches the table size MySQL reports.
cp -rp /datas/mysql/data/3308 /datas    # cold-copy the datadir so it can be restored between the two index-rebuild test runs

Use online DDL and the pt-osc tool, respectively, to drop and rebuild the indexes.

Drop the foreign key first: without dropping it, the composite index cannot be dropped, because the foreign-key column is the first column of that composite index.

nohup bash /tmp/ddl_index.sh &
2019-04-04-10:41:39 begin stop mysqld_3308
2019-04-04-10:41:41 begin rm -rf datadir and cp -rp datadir_bak
2019-04-04-10:46:53 start mysqld_3308
2019-04-04-10:46:59 online ddl begin
2019-04-04-11:20:34 online ddl stop
2019-04-04-11:20:34 begin stop mysqld_3308
2019-04-04-11:20:36 begin rm -rf datadir and cp -rp datadir_bak
2019-04-04-11:22:48 start mysqld_3308
2019-04-04-11:22:53 pt-osc begin
2019-04-04-12:19:15 pt-osc stop
Online DDL took 34 minutes and pt-osc took 57 minutes, so online DDL took roughly half the time of the pt-osc tool.
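For reference, the pt-osc run was along the lines of this sketch (the options are assumptions rather than the exact command used; the foreign key has already been dropped at this point, and pt-osc rebuilds the table by copying every row into a new table, which is the main reason it is slower here than in-place online DDL):

pt-online-schema-change --user=root --ask-pass \
  --socket=/datas/mysql/data/3308/mysqld.sock \
  --alter "DROP INDEX IXFK_arrival_record, ADD INDEX idx_product_id_sequence_station_no (product_id,sequence,station_no), ADD INDEX idx_receive_time (receive_time)" \
  D=test,t=arrival_record --execute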

Implementation

Since this is a one-primary, one-replica setup and the application connects through a VIP, the index drop-and-rebuild was done with online DDL. After stopping replication, the change is applied on the replica first (without writing binlog), then a primary/replica switchover is performed, and the change is applied on the newly demoted replica (again without writing binlog). The relevant part of the script follows (it assumes $passwd, $port, $db_, $table_name and $log_file are defined earlier):

function red_echo () {

local what="$*"
    echo -e "$(date +%F-%T)  ${what}"           

}

function check_las_comm(){

if [ "$1" != "0" ];then
    red_echo "$2"
    echo "exit 1"
    exit 1
fi
}

red_echo "stop slave"

mysql -uroot -p$passwd --socket=/datas/mysql/data/${port}/mysqld.sock -e"stop slave"

check_las_comm "$?" "stop slave failed"

red_echo "online ddl begin"

mysql -uroot -p$passwd --socket=/datas/mysql/data/${port}/mysqld.sock -e"set sql_log_bin=0;select now() as ddl_start;ALTER TABLE $db_.\`${table_name}\` DROP FOREIGN KEY FK_arrival_record_product,drop index IXFK_arrival_record,add index idx_product_id_sequence_station_no(product_id,sequence,station_no),add index idx_receive_time(receive_time);select now() as ddl_stop" >>${log_file} 2>&1

red_echo "onlie ddl stop"

red_echo "add foreign key"

mysql -uroot -p$passwd --socket=/datas/mysql/data/${port}/mysqld.sock -e"set sql_log_bin=0;ALTER TABLE $db_.${table_name} ADD CONSTRAINT _FK_${table_name}_product FOREIGN KEY (product_id) REFERENCES cq_new_cimiss.product (id) ON DELETE NO ACTION ON UPDATE NO ACTION;" >>${log_file} 2>&1

check_las_comm "$?" "add foreign key error"

red_echo "add foreign key stop"

red_echo "start slave"

mysql -uroot -p$passwd --socket=/datas/mysql/data/${port}/mysqld.sock -e"start slave"

check_las_comm "$?" "start slave failed"

Execution timing

2019-04-08-11:17:36 stop slave

mysql: [Warning] Using a password on the command line interface can be insecure.

ddl_start

2019-04-08 11:17:36

ddl_stop

2019-04-08 11:45:13

2019-04-08-11:45:13 online ddl stop

2019-04-08-11:45:13 add foreign key

2019-04-08-12:33:48 add foreign key stop

2019-04-08-12:33:48 start slave

Check the execution plans of the delete and select statements again:

explain select count(*) from arrival_record where receive_time < STR_TO_DATE('2019-03-10', '%Y-%m-%d')\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: arrival_record
partitions: NULL
type: range
possible_keys: idx_receive_time
key: idx_receive_time
key_len: 6
ref: NULL
rows: 7540948
filtered: 100.00
Extra: Using where; Using index
explain select count(*) from arrival_record where product_id=26 and receive_time between '2019-03-25 14:00:00' and '2019-03-25 15:00:00' and receive_spend_ms>=0\G;
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: arrival_record
partitions: NULL
type: range
possible_keys: idx_product_id_sequence_station_no,idx_receive_time
key: idx_receive_time
key_len: 6
ref: NULL
rows: 291448
filtered: 16.66
Extra: Using index condition; Using where
Both now use the idx_receive_time index, and the number of rows scanned has dropped dramatically.

After the index optimization

the delete still took 77s:

delete from arrival_record where receive_time < STR_TO_DATE('2019-03-10', '%Y-%m-%d');


Even with the index, deleting a whole month of data in one statement is a single huge transaction, which keeps pressure on the primary and drives replication lag. Another approach is to delete 20,000 rows at a time, walking the table in primary-key order:

First, get the largest primary-key id that satisfies the time condition.

Then delete the data in small batches by scanning forward in primary-key order.

Run the following once first:

SELECT MAX(id) INTO @need_delete_max_id FROM arrival_record WHERE receive_time < '2019-03-01';

DELETE FROM arrival_record WHERE id < @need_delete_max_id LIMIT 20000;

SELECT ROW_COUNT();  # returns 20000

Each small-batch delete returns row_count(), the number of rows deleted.

The program checks whether the returned row_count() is 0: if it is not, it runs the following statements again in a loop; if it is 0, it exits the loop and the deletion is complete (the driving loop is sketched after these statements).

DELETE FROM arrival_record WHERE id<@need_delete_max_id LIMIT 20000;

select ROW_COUNT();

After each batch the program sleeps for 0.5s.
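A minimal sketch of that driving loop as a shell script, assuming the same connection parameters as the earlier scripts (the batch size, cutoff date and sleep interval are the ones described above):

#!/bin/bash
passwd=xxxx
MYSQL="mysql -uroot -p$passwd --socket=/datas/mysql/data/3316/mysqld.sock -N -s"

# Get the largest primary key that satisfies the time condition (run once).
max_id=$($MYSQL -e "SELECT MAX(id) FROM cq_new_cimiss.arrival_record WHERE receive_time < '2019-03-01';")

while true; do
    # Delete one small batch, then read ROW_COUNT() from the same session.
    deleted=$($MYSQL -e "DELETE FROM cq_new_cimiss.arrival_record WHERE id < $max_id LIMIT 20000; SELECT ROW_COUNT();")
    # ROW_COUNT() = 0 means nothing is left below max_id: the cleanup is done.
    [ "$deleted" -eq 0 ] && break
    # Throttle each batch to limit the pressure on the primary and the replication lag.
    sleep 0.5
done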

Summary

  • When a table holds this much data, watch not only the response time of queries against it but also the cost of maintaining it (for example, DDL changes and deleting historical data both take a very long time).
  • When running DDL on a big table, choose the change method based on the table's actual situation (for example, the level of concurrent access on it and whether it has foreign keys).
  • When deleting large amounts of data from a big table, delete in small batches to reduce the pressure on the primary instance and the replication lag.


Original publication date: 2020-06-30

Author: jia-xin

Source: the "網際網路架構師" WeChat public account (https://mp.weixin.qq.com/s/6OM7mi4NCsK-d9Go-qthvg)