I'm working on DB2 8.2 FP10; the OS is Linux 2.6.9-42.ELsmp.
We have written a stored procedure that archives data from a transaction table to a history table based on a date criterion.
The procedure is written using dynamic SQL.
The transaction table receives about 1.2 million records per day and holds around 40 days of records (roughly 50 million in total).
The history table has 388 million records. The archival process works as follows:
First step: Create a global temporary table (GTT) similar to our transaction table. This is a one-time activity per daily archival run.
Second step: Create a unique index on the GTT matching the primary key of the transaction table. Also once per daily run.
Third step: Select data from the transaction table and insert it into the GTT:
(INSERT INTO GTT SELECT * FROM trantable FETCH FIRST 50000 ROWS ONLY; this repeats until all records for that date have been moved.)
Fourth step: Insert from the GTT into the history table.
(INSERT INTO HistTable SELECT * FROM GTT)
Fifth step: Delete from the transaction table based on the GTT.
(DELETE FROM Trantable WHERE keycolumns IN (SELECT keycolumns FROM GTT))
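The steps above can be sketched in SQL roughly as follows. This is a minimal sketch, not the actual procedure: table and column names such as trantable, histtable, trankey, and trandate, as well as the date predicate, are assumptions based on the description.

```sql
-- One-time setup per daily run: a GTT mirroring the transaction table,
-- plus a unique index matching its primary key (names assumed).
DECLARE GLOBAL TEMPORARY TABLE session.gtt_tran
    LIKE trantable ON COMMIT PRESERVE ROWS NOT LOGGED;
CREATE UNIQUE INDEX session.gtt_tran_pk ON session.gtt_tran (trankey);

-- Per 50,000-row batch (the GTT would be emptied between batches):
INSERT INTO session.gtt_tran
    SELECT * FROM trantable
    WHERE trandate = CURRENT DATE - 40 DAYS   -- assumed date criterion
    FETCH FIRST 50000 ROWS ONLY;

-- Copy the batch into the history table (the reported slow step).
INSERT INTO histtable SELECT * FROM session.gtt_tran;

-- Remove the archived batch from the transaction table.
DELETE FROM trantable
    WHERE trankey IN (SELECT trankey FROM session.gtt_tran);
```

Because the delete removes each batch from the transaction table, the next FETCH FIRST picks up fresh rows until the day's records are exhausted.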
The main problem is that the insert into the history table takes much longer than the other steps. The whole process for 50,000 records
takes 10 to 20 minutes, and the insert into the history table accounts for about 90% of that time. Table statistics are up to date.
Is there a better approach than the above that would improve the archival process?
How about using unload and load instead of the GTT? You could process the entire day in one pass, instead of running 50,000-row batches iteratively until finished. That would bring your costs down by quite a bit. Of course, your delete would then have to be done another way.
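For example, a single-pass version using DB2's EXPORT and LOAD utilities might look like this, run from the command line processor. The file path, table names, trandate column, and the literal date are assumptions for illustration.

```sql
-- Unload the whole day's rows in one pass (IXF keeps column definitions).
EXPORT TO /tmp/archive_day.ixf OF IXF
    SELECT * FROM trantable WHERE trandate = '2007-01-01';

-- LOAD appends the rows to the history table with minimal logging,
-- typically much faster than a fully logged INSERT ... SELECT.
LOAD FROM /tmp/archive_day.ixf OF IXF
    INSERT INTO histtable NONRECOVERABLE;

-- The delete is now handled separately, e.g. by the same date predicate.
DELETE FROM trantable WHERE trandate = '2007-01-01';
```

Note that NONRECOVERABLE avoids leaving the tablespace in backup-pending state, at the cost of the loaded data not being recoverable by roll-forward, so check your recovery requirements before using it.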