
Testing the best approach for adding/dropping indexes, adding/dropping columns, and reordering columns in MySQL

1. First, take an XtraBackup backup of the database. Better safe than sorry — never skip this!

2、

mysql -uroot -pD********

-- Export to a CSV file

use dsideal_db;

MariaDB [dsideal_db]> SELECT * from t_resource_info INTO OUTFILE "/usr/local/mysql/t_resource_info.txt" FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LINES TERMINATED BY '\n';

Query OK, 1582463 rows affected (29.97 sec)

3. Split the CSV file so it can be imported in batches, which is faster and more convenient; see http://www.nowamagic.net/librarys/veda/detail/2495 for reference. Never split by size, though — always split by line count. Anyone who splits by bytes and then complains the data can't be loaded back is sentenced to an hour facing the wall!

mkdir -p /usr/local/huanghai

split -a 2 -d -l 50000 /usr/local/mysql/t_resource_info.txt /usr/local/huanghai/prefix

(takes about 2-3 seconds)
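The same line-based split can be sketched in Python 3. This is a minimal, illustrative version (the function name and paths are hypothetical, not part of the original scripts); like `split -l`, it cuts only at line boundaries, so no CSV row is ever split in half:

```python
# Split a text file into chunks of at most `lines_per_chunk` complete lines,
# mirroring `split -a 2 -d -l 50000 input prefix`.
def split_by_lines(src_path, dst_prefix, lines_per_chunk=50000):
    chunk_paths = []
    with open(src_path, "r", encoding="utf-8") as src:
        chunk, out = 0, None
        for i, line in enumerate(src):
            if i % lines_per_chunk == 0:
                if out:
                    out.close()
                # Two-digit numeric suffix, like split's -a 2 -d
                path = "%s%02d" % (dst_prefix, chunk)
                out = open(path, "w", encoding="utf-8")
                chunk_paths.append(path)
                chunk += 1
            out.write(line)
        if out:
            out.close()
    return chunk_paths
```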

4. Truncate the original table and modify the columns. With the backup in hand, there is nothing to fear.

truncate t_resource_info;
alter table t_resource_info add huanghai_test int;

5. Tune the session settings and start the import

SET autocommit=0;
SET unique_checks=0;
SET foreign_key_checks=0;
SET sql_log_bin=0;

Note that the InnoDB sizes below are server system variables, not user variables — writing `SET @innodb_buffer_pool_size=...` only creates a session user variable and has no effect on the server. Configure them in my.cnf (and restart) before the import:

innodb_additional_mem_pool_size=26214400
innodb_buffer_pool_size=1073741824
innodb_log_buffer_size=8388608
innodb_log_file_size=268435456

load data infile '/usr/local/huanghai/prefix00' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';
load data infile '/usr/local/huanghai/prefix01' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';
load data infile '/usr/local/huanghai/prefix02' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';
load data infile '/usr/local/huanghai/prefix03' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';
load data infile '/usr/local/huanghai/prefix04' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';
load data infile '/usr/local/huanghai/prefix05' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';
load data infile '/usr/local/huanghai/prefix06' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';
load data infile '/usr/local/huanghai/prefix07' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';
load data infile '/usr/local/huanghai/prefix08' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';
load data infile '/usr/local/huanghai/prefix09' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';
load data infile '/usr/local/huanghai/prefix10' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';
load data infile '/usr/local/huanghai/prefix11' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';
load data infile '/usr/local/huanghai/prefix12' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';
load data infile '/usr/local/huanghai/prefix13' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';
load data infile '/usr/local/huanghai/prefix14' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';
load data infile '/usr/local/huanghai/prefix15' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';
load data infile '/usr/local/huanghai/prefix16' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';
load data infile '/usr/local/huanghai/prefix17' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';
load data infile '/usr/local/huanghai/prefix18' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';
load data infile '/usr/local/huanghai/prefix19' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';

commit;

6. Restore the session settings

SET autocommit=1;
SET unique_checks=1;
SET foreign_key_checks=1;
SET sql_log_bin=1;

7. I recommend developing a Python 3 program to chain these steps together. This is the best approach I can think of: it is essentially risk-free and can roughly saturate the disk's maximum I/O. I do not recommend brute-force tricks such as editing the .frm file directly — that might work when the new column is appended at the end, but for a column in the middle it can be a disaster, and it cannot be scripted. This approach, by contrast, is easy to build in Python 3.
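As a sketch of what such a Python 3 driver might look like, here is an illustrative helper that generates the import SQL script for all the split chunks (the function name and directory layout are assumptions, not the original scripts; it simply mirrors the session settings and LOAD DATA statements shown above):

```python
import glob

# Generate one SQL script covering every chunk produced by split, wrapped in
# the fast-import session settings. The result can be fed to the mysql client,
# e.g.: mysql -uroot -p dsideal_db < import.sql
def build_import_sql(chunk_dir, table):
    lines = [
        "SET autocommit=0;",
        "SET unique_checks=0;",
        "SET foreign_key_checks=0;",
        "SET sql_log_bin=0;",
    ]
    # sorted() keeps the numeric-suffix order: prefix00, prefix01, ...
    for path in sorted(glob.glob(chunk_dir + "/prefix*")):
        lines.append(
            "load data infile '%s' IGNORE into table %s "
            "fields terminated by ',' enclosed by '\"';" % (path, table)
        )
    lines += [
        "commit;",
        "SET autocommit=1;",
        "SET unique_checks=1;",
        "SET foreign_key_checks=1;",
        "SET sql_log_bin=1;",
    ]
    return "\n".join(lines)
```

Writing the statements to a file and piping them through one mysql session keeps everything inside a single transaction until the final commit, which is what makes the batched import fast.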

A note on the test results: this machine is seriously powerful, and typical customers probably won't have hardware like it, so take the numbers as a reference only:

Test table: t_resource_info

Row count: 1582937

I. Export

[[email protected] TestLoadFile]# python3 ExportData.py

2017-11-05 17:03:57      Working directory created successfully!
2017-11-05 17:03:59      Starting data export...
2017-11-05 17:04:29      Data exported successfully!
2017-11-05 17:04:29      Splitting...
2017-11-05 17:04:32      Split completed successfully!

Export took 35 seconds.

II. Re-import

[[email protected] TestLoadFile]# python3 ImportData.py

2017-11-05 16:58:08,378 INFO    : Generating SQL script...
2017-11-05 16:58:08,380 INFO    : SQL script generated successfully!
2017-11-05 16:58:08,380 INFO    : Executing SQL script...
2017-11-05 16:59:27,223 INFO    : SQL script executed successfully!

Import took 79 seconds.

Total: 114 seconds.

===================================================================================================

Test machine:

Physical server, 4 CPU sockets:

cat /proc/cpuinfo| grep "physical id"| sort| uniq| wc -l

4

Cores per CPU:

cat /proc/cpuinfo| grep "cpu cores"

Number of logical processors:

cat /proc/cpuinfo| grep "processor"| wc -l

64

CPU model:

[[email protected] TestLoadFile]# cat /proc/cpuinfo | grep name | cut -f2 -d: | uniq -c

64  Intel(R) Xeon(R) CPU E7-4809 v4 @ 2.10GHz

Memory:

cat /proc/meminfo

MemTotal:       65845352 kB

===================================================================================================

Appendix: test run on 10.10.14.224

[[email protected] TestLoadFile]# python3 ExportData.py

2017-11-06 07:51:14      Working directory created successfully!
2017-11-06 07:51:14      Starting data export...
2017-11-06 07:53:12      Data exported successfully!
2017-11-06 07:53:12      Splitting...
2017-11-06 07:53:27      Split completed successfully!


[[email protected] TestLoadFile]# python3  ImportData.py

2017-11-06 07:55:37,622 INFO    : Generating SQL script...
2017-11-06 07:55:37,629 INFO    : SQL script generated successfully!
2017-11-06 07:55:37,630 INFO    : Executing SQL script...
2017-11-06 08:07:40,093 INFO    : SQL script executed successfully!

===================================================================================================

Appendix: test scripts — link: http://pan.baidu.com/s/1dFbCEIl  password: 75j5