
Testing the best approach for adding/dropping indexes, adding/dropping columns, and changing column order in MySQL

1、First take an XtraBackup backup of the database. Better safe than sorry; do not skip this!

2、Export the table to a CSV file:

mysql -uroot -pD********

-- export to a CSV file

use dsideal_db;

MariaDB [dsideal_db]> SELECT * from t_resource_info INTO OUTFILE "/usr/local/mysql/t_resource_info.txt" FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LINES TERMINATED BY '\n';

Query OK, 1582463 rows affected (29.97 sec)

3、Split the CSV file so it can be imported in batches, which is faster and more convenient; see http://www.nowamagic.net/librarys/veda/detail/2495 for reference. But never split by size: always split by line count, otherwise rows get torn apart mid-line. Whoever complains that the file is too big to bring back that way is sentenced to an hour of facing the wall!

mkdir -p /usr/local/huanghai

split -a 2 -d -l 50000 /usr/local/mysql/t_resource_info.txt /usr/local/huanghai/prefix

(takes about 2-3 seconds)
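If you are scripting the whole procedure in Python anyway (as recommended at the end of this post), the `split` step can be done in pure Python. This is a minimal sketch, not the author's actual ExportData.py; the paths, prefix, and 50000-line chunk size simply mirror the shell command above:

```python
import os

def split_by_lines(src_path, dst_dir, prefix="prefix", lines_per_chunk=50000):
    """Split src_path into numbered chunks of at most lines_per_chunk lines,
    named like `split -a 2 -d` would name them (prefix00, prefix01, ...)."""
    os.makedirs(dst_dir, exist_ok=True)
    chunk_paths = []
    out = None
    with open(src_path, "r", encoding="utf-8") as src:
        chunk = 0
        for i, line in enumerate(src):
            if i % lines_per_chunk == 0:
                # close the previous chunk and start a new numbered one
                if out:
                    out.close()
                path = os.path.join(dst_dir, "%s%02d" % (prefix, chunk))
                out = open(path, "w", encoding="utf-8")
                chunk_paths.append(path)
                chunk += 1
            out.write(line)
    if out:
        out.close()
    return chunk_paths
```

Splitting by whole lines, as here, is exactly what keeps each CSV row intact for LOAD DATA INFILE.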

4、Truncate the original table and modify the columns. We have a backup, so nothing to fear:

truncate t_resource_info;

alter table t_resource_info add huanghai_test int;

5、Tune the session settings and start the import:

SET autocommit=0;

SET unique_checks=0;

SET foreign_key_checks=0;

SET sql_log_bin=0;

-- Note: the four statements below only create user variables; prefixing a
-- system variable name with @ does NOT change the server's InnoDB
-- configuration. To actually resize the buffer pool or redo log, set
-- innodb_buffer_pool_size etc. in my.cnf (or via SET GLOBAL where the
-- variable is dynamic) and restart the server where required.
SET @innodb_additional_mem_pool_size=26214400;
SET @innodb_buffer_pool_size=1073741824;
SET @innodb_log_buffer_size=8388608;
SET @innodb_log_file_size=268435456;

load data infile '/usr/local/huanghai/prefix00' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';

load data infile '/usr/local/huanghai/prefix01' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';

load data infile '/usr/local/huanghai/prefix02' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';

load data infile '/usr/local/huanghai/prefix03' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';

load data infile '/usr/local/huanghai/prefix04' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';

load data infile '/usr/local/huanghai/prefix05' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';

load data infile '/usr/local/huanghai/prefix06' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';

load data infile '/usr/local/huanghai/prefix07' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';

load data infile '/usr/local/huanghai/prefix08' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';

load data infile '/usr/local/huanghai/prefix09' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';

load data infile '/usr/local/huanghai/prefix10' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';

load data infile '/usr/local/huanghai/prefix11' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';

load data infile '/usr/local/huanghai/prefix12' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';

load data infile '/usr/local/huanghai/prefix13' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';

load data infile '/usr/local/huanghai/prefix14' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';

load data infile '/usr/local/huanghai/prefix15' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';

load data infile '/usr/local/huanghai/prefix16' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';

load data infile '/usr/local/huanghai/prefix17' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';

load data infile '/usr/local/huanghai/prefix18' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';

load data infile '/usr/local/huanghai/prefix19' IGNORE into table dsideal_db.t_resource_info_huanghai fields terminated by ',' enclosed by '"';

commit;

6、Restore the session settings:

SET autocommit=1;

SET unique_checks=1;

SET foreign_key_checks=1;

SET sql_log_bin=1;

7、I recommend writing a small Python 3 program to chain all of these steps together. This is the best approach I can think of: it is essentially risk-free, and the import can drive the disk at close to its maximum I/O. I do not recommend brute-force tricks such as editing the .frm file directly; that might work when appending a column at the end of a table, but for a column in the middle it can be a disaster, and it cannot be automated. Scripting the steps in Python 3, on the other hand, is straightforward.
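As an illustration of what the SQL-generation half of such a script might look like (this is a hypothetical sketch, not the author's ImportData.py; the table name, paths, and chunk prefix follow the example above), it lists the split chunks and emits one LOAD DATA INFILE statement per chunk, wrapped in the same session settings used in steps 5 and 6:

```python
import os

# one LOAD DATA statement per chunk file, matching the statements above
LOAD_TEMPLATE = (
    "load data infile '{path}' IGNORE into table "
    "dsideal_db.t_resource_info_huanghai "
    "fields terminated by ',' enclosed by '\"';"
)

def build_import_script(chunk_dir, prefix="prefix"):
    """Return the full SQL script: disable checks, load every chunk
    in order, commit, then restore the session settings."""
    chunks = sorted(f for f in os.listdir(chunk_dir) if f.startswith(prefix))
    lines = [
        "SET autocommit=0;",
        "SET unique_checks=0;",
        "SET foreign_key_checks=0;",
        "SET sql_log_bin=0;",
    ]
    lines += [LOAD_TEMPLATE.format(path=os.path.join(chunk_dir, f))
              for f in chunks]
    lines += [
        "commit;",
        "SET autocommit=1;",
        "SET unique_checks=1;",
        "SET foreign_key_checks=1;",
        "SET sql_log_bin=1;",
    ]
    return "\n".join(lines)
```

The generated script can then be fed to the mysql client in one shot, so the number of chunks no longer has to be hard-coded as in the twenty statements above.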

A few test results as a supplement. This machine is seriously beefy, though; a typical customer probably will not have hardware like this, so treat the numbers as a reference only:

Test table:

t_resource_info

Row count: 1582937

I. Export

[root@… TestLoadFile]# python3 ExportData.py

2017-11-05 17:03:57      Work directory created successfully!

2017-11-05 17:03:59      Starting data export...

2017-11-05 17:04:29      Data exported successfully!

2017-11-05 17:04:29      Splitting the file...

2017-11-05 17:04:32      File split successfully!

The export took 35 seconds.

II. Re-import

[root@… TestLoadFile]# python3 ImportData.py

2017-11-05 16:58:08,378 INFO    : Generating SQL script...

2017-11-05 16:58:08,380 INFO    : SQL script generated successfully!

2017-11-05 16:58:08,380 INFO    : Executing SQL script...

2017-11-05 16:59:27,223 INFO    : SQL script executed successfully!

The import took 79 seconds.

114 seconds in total.
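A quick back-of-the-envelope check of the numbers above (using the 1,582,937 row count and the 35 s + 79 s timings from this test run): the pipeline moves roughly 13.9k rows/s end to end, and about 20k rows/s during the import phase alone.

```python
# Throughput check for the test run above.
rows = 1_582_937
export_s, import_s = 35, 79
total_s = export_s + import_s     # 114 s, matching the total reported
overall = rows // total_s         # rows per second, end to end
import_only = rows // import_s    # rows per second during import alone
print(total_s, overall, import_only)
```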

===================================================================================================

Test machine:

Physical server with 4 CPU sockets:

cat /proc/cpuinfo| grep "physical id"| sort| uniq| wc -l

4

Cores per CPU:

cat /proc/cpuinfo| grep "cpu cores"

Number of logical processors:

cat /proc/cpuinfo| grep "processor"| wc -l

64

CPU model:

[root@… TestLoadFile]# cat /proc/cpuinfo | grep name | cut -f2 -d: | uniq -c

64  Intel(R) Xeon(R) CPU E7-4809 v4 @ 2.10GHz

Memory:

cat /proc/meminfo

MemTotal:       65845352 kB

===================================================================================================

Appendix: test run on 10.10.14.224

[root@… TestLoadFile]# python3 ExportData.py

2017-11-06 07:51:14      Work directory created successfully!

2017-11-06 07:51:14      Starting data export...

2017-11-06 07:53:12      Data exported successfully!

2017-11-06 07:53:12      Splitting the file...

2017-11-06 07:53:27      File split successfully!


[root@… TestLoadFile]# python3 ImportData.py

2017-11-06 07:55:37,622 INFO    : Generating SQL script...

2017-11-06 07:55:37,629 INFO    : SQL script generated successfully!

2017-11-06 07:55:37,630 INFO    : Executing SQL script...

2017-11-06 08:07:40,093 INFO    : SQL script executed successfully!

===================================================================================================

Appendix: test scripts. Link: http://pan.baidu.com/s/1dFbCEIl  Password: 75j5