Need some MySQL help with importing large amounts of data
We're using MySQL 5.6 on RDS. The app drives an ETL process that imports millions of records: it bulk loads data into staging tables and then merges the staging data into the target tables.
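For context, the load step is essentially this (file and table names here are placeholders, not our real schema):

```sql
-- Simplified placeholder version of our staging load.
LOAD DATA LOCAL INFILE '/tmp/usages.csv'
INTO TABLE staging_fact_usages
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;
```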
1. What DB parameters should be applied to the instance to handle huge volumes of inserts/updates/deletes?
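The session-level toggles I've been experimenting with around the load look like this (untested in production; whether they're advisable here is exactly my question):

```sql
-- Around the bulk load; both are standard MySQL session variables
-- and need no special privileges.
SET SESSION unique_checks = 0;       -- defer unique-key checking
SET SESSION foreign_key_checks = 0;  -- defer FK checking
-- ... run LOAD DATA / bulk inserts ...
SET SESSION unique_checks = 1;
SET SESSION foreign_key_checks = 1;

-- Server-side candidates would go in the RDS parameter group
-- (SET GLOBAL needs SUPER, which RDS doesn't grant), e.g.
-- innodb_flush_log_at_trx_commit = 2 and a larger innodb_log_file_size.
```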
2. Any index operation (create/drop) on big tables takes too much time. On a 24-million-record table it takes more than half an hour to create an index; the same operation on similar tables in SQL Server took only 1-2 minutes. Are there DB parameters that affect index builds?
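What I've found so far is MySQL 5.6's online DDL. Something like the statement below (the column list is made up, and I'm assuming the table is fact_usages; only the index name is real) at least keeps the table writable during the build, though it doesn't make the build itself faster. I also saw innodb_sort_buffer_size mentioned in connection with index builds (parameter group only in 5.6, since it isn't dynamic), but I have no idea if it helps at this scale:

```sql
-- Force an in-place, non-locking index build (MySQL 5.6 online DDL).
ALTER TABLE fact_usages
  ADD INDEX idx_fact_usages_common (account_id, usage_date),
  ALGORITHM=INPLACE, LOCK=NONE;
```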
3. Does it make sense to drop/recreate indexes on the target tables while merging data from the staging tables in the ETL? Currently we have a stored procedure for the merge with the following structure:
- Drop the index on the target table
- Merge (insert/update/delete) data into the target table
- Recreate the index
The CREATE INDEX statement timed out recently, so we're thinking of removing the drop/recreate steps and merging against the live indexes (rough sketch below).
The target table has 20+ million records; the index in question is idx_fact_usages_common.
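If we remove the drop/recreate steps, the merge body would look something like this (column names are placeholders, and it assumes usage_id is the primary key so ON DUPLICATE KEY UPDATE fires):

```sql
-- Upsert staging rows into the target, indexes left in place.
INSERT INTO fact_usages (usage_id, account_id, usage_date, amount)
SELECT usage_id, account_id, usage_date, amount
FROM staging_fact_usages
ON DUPLICATE KEY UPDATE
  account_id = VALUES(account_id),
  usage_date = VALUES(usage_date),
  amount     = VALUES(amount);

-- Remove target rows that no longer exist in staging.
DELETE f
FROM fact_usages AS f
LEFT JOIN staging_fact_usages AS s ON s.usage_id = f.usage_id
WHERE s.usage_id IS NULL;
```

Would keeping idx_fact_usages_common in place while merging into a 20M-row table be slower overall than rebuilding it afterwards?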