1. How can I monitor the progress of an import of a large .sql file?
First install pv (Pipe Viewer) on your OS, then try something like this:
pv sqlfile.sql | mysql -uxxx -pxxxx dbname --skip-lock-tables -v --extended-insert --quick
--extended-insert: makes sure it is not one INSERT per row; a single statement can carry dozens of rows.
--quick: useful when dumping large tables; by default mysqldump reads the whole table into memory and then dumps it, whereas --quick streams the rows without consuming much memory.
--no-create-info: only the data is exported; no CREATE TABLE statements are added.
(Note: these three are mysqldump options that affect how the dump file is created, not options of the mysql client invocation above.)
Reference: fast-data-import-trick
2. Bulk Data Loading Tips
When importing data into InnoDB, make sure that MySQL does not have autocommit mode enabled because that requires a log flush to disk for every insert. To disable autocommit during your import operation, surround it with SET autocommit and COMMIT statements:
SET autocommit=0;
... SQL import statements ...
COMMIT;
If you use the mysqldump option --opt, you get dump files that are fast to import into an InnoDB table, even without wrapping them with the SET autocommit and COMMIT statements.
If you have UNIQUE constraints on secondary keys, you can speed up table imports by temporarily turning off the uniqueness checks during the import session:
SET unique_checks=0;
... SQL import statements ...
SET unique_checks=1;
For big tables, this saves a lot of disk I/O because InnoDB can use its insert buffer to write secondary index records in a batch. Be certain that the data contains no duplicate keys.
If you have FOREIGN KEY constraints in your tables, you can speed up table imports by turning the foreign key checks off for the duration of the import session:
SET foreign_key_checks=0;
... SQL import statements ...
SET foreign_key_checks=1;
For big tables, this can save a lot of disk I/O.
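Taken together, the three settings above can be applied by wrapping the whole dump in a single stream fed to mysql. A minimal sketch; the sample dump file below is created only so the example is self-contained, and all file names are placeholders:

```shell
# Create a tiny stand-in dump so the sketch is self-contained;
# in practice "sample_dump.sql" would be your real dump file.
printf 'INSERT INTO t VALUES (1);\n' > sample_dump.sql

# Wrap the dump in one transaction with uniqueness and FK checks off.
{
  echo 'SET autocommit=0;'
  echo 'SET unique_checks=0;'
  echo 'SET foreign_key_checks=0;'
  cat sample_dump.sql
  echo 'SET foreign_key_checks=1;'
  echo 'SET unique_checks=1;'
  echo 'COMMIT;'
} > wrapped_dump.sql
```

The wrapped file can then be piped to the server as in section 1, e.g. pv wrapped_dump.sql | mysql -uxxx -pxxxx dbname.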
3. How can I speed up a MySQL restore from a dump file?
One can put these statements at the top of the dump file:
SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS = 0;
SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS = 0;
And put these statements at the end of the dump file
SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS;
SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS;
sed -i '1s/^/SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS = 0;\nSET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS = 0;\n/' dumpfile.sql
echo -e "SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS;\nSET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS;" >> dumpfile.sql
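Note that sed -i as written is GNU sed syntax (BSD/macOS sed expects -i ''), and it modifies the dump in place, which is risky if it is your only copy. A non-destructive sketch that writes a wrapped copy instead; the helper name and file names are made up for illustration:

```shell
# wrap_dump IN OUT: write a copy of IN with the check-disabling
# statements prepended and the restoring statements appended,
# leaving the original dump untouched.
wrap_dump() {
  {
    echo 'SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS = 0;'
    echo 'SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS = 0;'
    cat "$1"
    echo 'SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS;'
    echo 'SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS;'
  } > "$2"
}

# Demonstration on a tiny stand-in dump:
printf 'INSERT INTO t VALUES (1);\n' > demo_dump.sql
wrap_dump demo_dump.sql demo_dump_fast.sql
```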
4. Check data size
SELECT CONCAT(table_schema, '.', table_name) AS schema_table_name,
       CONCAT(ROUND(table_rows / (1000 * 1000), 2), 'M') AS `rows`,
       CONCAT(ROUND(data_length / (1024 * 1024 * 1024), 2), 'G') AS data_size,
       CONCAT(ROUND(index_length / (1024 * 1024 * 1024), 2), 'G') AS index_size,
       CONCAT(ROUND((data_length + index_length) / (1024 * 1024 * 1024), 2), 'G') AS total_size
FROM information_schema.TABLES
ORDER BY data_length + index_length DESC
LIMIT 10;
(`rows` is backquoted because ROWS became a reserved word in MySQL 8.0.)
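The query rounds byte counts with the 1024^3 (GiB) convention. The same arithmetic can be sanity-checked outside MySQL; a small helper sketch (the function name is made up):

```shell
# bytes_to_gib: mirror the query's ROUND(bytes / (1024*1024*1024), 2) in awk.
bytes_to_gib() {
  awk -v b="$1" 'BEGIN { printf "%.2fG\n", b / (1024 * 1024 * 1024) }'
}

bytes_to_gib 5368709120   # 5 GiB of data_length -> prints 5.00G
```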
5. Config
innodb_write_io_threads = 16
bulk_insert_buffer_size = 4G
max_allowed_packet = 1G
net_buffer_length = 1M
read_buffer_size = 1G
connect_jvm_path = /usr/lib/jvm/java-8-oracle/jre/lib/amd64/server
connect_class_path = /usr/lib/jvm/java-8-oracle/jre/lib/ext/
event_scheduler = OFF
innodb_doublewrite = OFF
innodb_buffer_pool_size = 8G
innodb_flush_log_at_trx_commit = 0
Note: innodb_doublewrite = OFF and innodb_flush_log_at_trx_commit = 0 trade crash safety for import speed; restore the defaults once the import finishes (see [4]).
Dump:
mysqldump -uxxxxx -pxxxxx -hxxxxx --lock-tables=false --databases xxxxx --ignore-table=xxxxx.xxxxx > file_dump.sql
Restore:
pv file_dump.sql | mysql -uxxxxx -pxxxxx -hxxxxx -Pxxxxx
Reference:
[1] https://www.experts-exchange.com/questions/29078089/Speed-Up-MySQL-Import-Speed.html
[2] https://blog.gabriela.io/2016/05/17/fast-data-import-trick/
[3] https://dba.stackexchange.com/questions/150962/mysql-settings-useful-to-speed-up-a-mysqldump-import
[4] https://www.percona.com/blog/2006/08/04/innodb-double-write/