Disable/drop indexes before loading, load in batches, then enable/rebuild the indexes after the load. Are you eventually going to end up with thousands of tables, or do they get dropped after that data is processed/transformed? Just a concern.
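The drop → batch load → rebuild pattern above can be sketched with Python's stdlib `sqlite3` driver. The poster's database isn't named, so SQLite stands in here purely for illustration; the table and index names are hypothetical, and the exact DDL (e.g. `ALTER INDEX ... DISABLE` vs. `DROP INDEX`) varies by database.

```python
import sqlite3

# In-memory database as a stand-in for the real one; table and
# index names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (ts INTEGER, value REAL)")
conn.execute("CREATE INDEX idx_readings_ts ON readings (ts)")

rows = [(i, i * 0.5) for i in range(100_000)]

# 1. Drop the index so each INSERT skips index maintenance.
conn.execute("DROP INDEX idx_readings_ts")

# 2. Load in batches, one explicit transaction per batch.
batch_size = 10_000
for start in range(0, len(rows), batch_size):
    with conn:  # commits the batch as a single transaction
        conn.executemany(
            "INSERT INTO readings VALUES (?, ?)",
            rows[start:start + batch_size],
        )

# 3. Rebuild the index once, after the load.
conn.execute("CREATE INDEX idx_readings_ts ON readings (ts)")

print(conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0])
```

The win comes from paying the index-maintenance cost once, in a single bulk `CREATE INDEX`, instead of on every inserted row.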
Every thirty minutes I will create a new table, and the old table will be closed.
Actually I followed your suggestion -- disable the index --> load --> enable the index -- and it worked great. Now I am able to dump 21,000 rows per second. However, if a client query accesses the table while the data is loading, the table will have no indexes, so the query will take a very long time. Is there any way I can obtain a lock on the table while loading the data, so that it is not available for reads until the load finishes?
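Most databases do let you hold an exclusive table lock for the duration of the load (e.g. PostgreSQL's `LOCK TABLE ... IN ACCESS EXCLUSIVE MODE` inside a transaction). As a minimal runnable sketch, again using stdlib SQLite as a stand-in and hypothetical table names: `BEGIN EXCLUSIVE` locks out all other connections, including readers, until `COMMIT`.

```python
import os
import sqlite3
import tempfile

# File-backed database so two connections can contend for the lock.
path = os.path.join(tempfile.mkdtemp(), "demo.db")

# isolation_level=None -> autocommit; we manage transactions by hand.
writer = sqlite3.connect(path, isolation_level=None)
writer.execute("CREATE TABLE readings (ts INTEGER, value REAL)")

# timeout=0 -> a blocked reader fails immediately instead of waiting.
reader = sqlite3.connect(path, timeout=0)

# Take an exclusive lock for the whole load; readers are shut out.
writer.execute("BEGIN EXCLUSIVE")
writer.executemany(
    "INSERT INTO readings VALUES (?, ?)",
    [(i, i * 0.5) for i in range(1000)],
)

# A client query during the load is rejected rather than running
# an index-less scan.
try:
    reader.execute("SELECT COUNT(*) FROM readings")
except sqlite3.OperationalError as e:
    print("reader blocked:", e)

writer.execute("COMMIT")  # release the lock

# After commit, readers see the fully loaded table.
print(reader.execute("SELECT COUNT(*) FROM readings").fetchone()[0])
```

The trade-off is that clients now error out (or wait, with a nonzero timeout) instead of getting slow answers; whether that is acceptable depends on what the clients do when a query fails.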