I am currently maintaining a 200 GB database which is fully refreshed every day. The reason for this full refresh is that we receive around 5 million updates every day that need to be applied to 5 tables, each of which contains around 100 million records. Our database must be available to end users 24x7.
Since the database is fully refreshed every day, the process we follow is described below:
1) Load the fresh data into a dummy partitioned table with a single range partition of MAXVALUE. At this stage there are no indexes on these dummy tables.
2) Create indexes on the dummy partitioned tables. (The primary keys are backed by non-unique indexes in order to ensure that step 4 takes no time.)
3) Analyze the dummy partitioned tables.
4) Finally, swap the data and index segments on the fly by exchanging the partition of the dummy table with my non-partitioned original table.
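The steps above can be sketched roughly as follows. Table and index names (BIG_TABLE, BIG_TABLE_DUMMY, etc.) are my placeholders, not the real ones:

```sql
-- 1) Staging table with a single MAXVALUE range partition, no indexes yet
CREATE TABLE big_table_dummy (
  id    NUMBER        NOT NULL,
  data  VARCHAR2(100)
)
PARTITION BY RANGE (id)
  (PARTITION p_all VALUES LESS THAN (MAXVALUE));

-- ... load the fresh data here (e.g. direct-path INSERT /*+ APPEND */) ...

-- 2) A NON-unique index backs the primary key, so the later exchange
--    can run WITHOUT VALIDATION and complete almost instantly
CREATE INDEX big_table_dummy_pk_ix ON big_table_dummy (id);
ALTER TABLE big_table_dummy
  ADD CONSTRAINT big_table_dummy_pk PRIMARY KEY (id)
  USING INDEX big_table_dummy_pk_ix;

-- 3) Gather statistics on the staging table and its indexes
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'BIG_TABLE_DUMMY', cascade => TRUE);

-- 4) Swap segments: only the data dictionary is updated, so the
--    exchange itself is near-instant regardless of table size
ALTER TABLE big_table_dummy
  EXCHANGE PARTITION p_all WITH TABLE big_table
  INCLUDING INDEXES WITHOUT VALIDATION;
```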
My questions are:
A) After this partition exchange, do I need to rebuild the indexes on my original non-partitioned table?
B) Do I need to analyze my original non-partitioned table and its indexes? (My tests show that the statistics generated in step 3 are carried over to my original table after the exchange completes.)
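The check I ran for B looked roughly like this (table names are placeholders for my real ones). Comparing NUM_ROWS and LAST_ANALYZED before and after the exchange shows the statistics moving with the segments:

```sql
-- Run before and after the EXCHANGE PARTITION: the statistics follow
-- the segments, so the live table shows the staging table's stats
SELECT table_name, num_rows, last_analyzed
  FROM user_tables
 WHERE table_name IN ('BIG_TABLE', 'BIG_TABLE_DUMMY');

SELECT index_name, num_rows, last_analyzed
  FROM user_indexes
 WHERE table_name = 'BIG_TABLE';
```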
C) Is there any other, faster process that could be adopted? The present process takes around 6 hours, but the end result is almost no downtime for the database.