MySQL ships with a great tool, mysqldump, for dumping databases. It is the official SQL dump utility for MySQL, and it makes a DBA's life easy: a database can be backed up and restored with just two commands. But sometimes, due to infrastructure limits, you cannot dump and restore that easily, especially when you are dealing with a huge amount of data. Databases grow over time; a few hundred GB is quite common, and software that runs long enough can reach the terabyte range. The problems start when the size gets this huge.
Split the dump table by table.
mysqldump db1 table1 table2 table3 > table1-3.sql
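Taking this further, each table can go into its own file so a failed restore can be retried per table. Here is a minimal sketch; the database name db1, the table list, and the helper name build_dump_cmd are assumptions for illustration (in a real run the table list would come from `mysql -N -e "SHOW TABLES" db1`):

```shell
# build_dump_cmd DB TABLE -> prints the mysqldump command for one table.
# Printing instead of executing keeps this a dry run; pipe the output to
# `sh` (or replace printf with the real command) to actually dump.
build_dump_cmd() {
  printf 'mysqldump %s %s > %s.sql\n' "$1" "$2" "$2"
}

# Hypothetical table list for the db1 database from the example above.
for t in table1 table2 table3; do
  build_dump_cmd db1 "$t"
done
```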
Split big tables by rows with --where.
-w, --where=name Dump only selected records. Quotes are mandatory.
If your table has an AUTO_INCREMENT column, you can split it into any number of chunks.
mysqldump --where "id%2=0" db1 table1 > table1_even.sql
mysqldump --where "id%2=1" db1 table1 > table1_odd.sql
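The even/odd trick generalizes to any number of chunks with id % N. A sketch, assuming the id AUTO_INCREMENT column and the db1/table1 names from the example above; the chunk count and the helper name chunk_where are illustrative:

```shell
# chunk_where N I -> WHERE clause selecting rows where id % N == I.
chunk_where() {
  printf 'id%%%s=%s' "$1" "$2"
}

# Print one mysqldump command per chunk (dry run; pipe to `sh` to execute).
N=4
i=0
while [ "$i" -lt "$N" ]; do
  echo "mysqldump --where \"$(chunk_where "$N" "$i")\" db1 table1 > table1_chunk$i.sql"
  i=$((i + 1))
done
```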
# Dump the first 100000 rows from the table named table5 into the file dump.sql.
# MySQL's syntax is LIMIT [offset,] row_count, so something like this works:
mysqldump --where "1 LIMIT 100000" database1 table5 > dump.sql
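Because --where accepts an offset too, the whole table can be dumped page by page. A sketch, assuming the database1/table5 names from the example above; the total row count, page size, and helper name page_where are assumptions:

```shell
# page_where OFFSET COUNT -> the "1 LIMIT offset, count" condition that
# mysqldump appends to its SELECT, turning it into a paged dump.
page_where() {
  printf '1 LIMIT %s, %s' "$1" "$2"
}

# Print one mysqldump command per 100000-row page (dry run).
ROWS=300000   # assumed total; in practice query SELECT COUNT(*) first
PAGE=100000
off=0; n=0
while [ "$off" -lt "$ROWS" ]; do
  echo "mysqldump --where \"$(page_where "$off" "$PAGE")\" database1 table5 > table5_part$n.sql"
  off=$((off + PAGE)); n=$((n + 1))
done
```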
Others. Since the --where switch accepts any SQL condition, you can split on any criteria you like.
mysqldump --where "year(date) <= 2008" db1 payment > payment_prior_2009.sql
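The same date condition can drive a per-year split, which keeps each archive file small and lets old years be dumped once and left alone. A sketch, assuming the db1/payment names and date column from the example above; the year range and helper name year_where are illustrative:

```shell
# year_where YYYY -> WHERE clause selecting one calendar year of rows.
year_where() {
  printf 'year(date) = %s' "$1"
}

# Print one mysqldump command per year (dry run; pipe to `sh` to execute).
for y in 2006 2007 2008; do
  echo "mysqldump --where \"$(year_where "$y")\" db1 payment > payment_$y.sql"
done
```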
Some useful options:
--skip-add-drop-table Do not generate DROP TABLE statements.
--skip-create-options Do not include MySQL-specific options in the CREATE TABLE statements.
--replace Use REPLACE INTO instead of INSERT INTO.