We maintain partitioned tables in our data warehouse; they store 15 months of data, and we have housekeeping PL/SQL scripts that drop old partitions and create new ones as data comes in.
Now there's a requirement from management to keep the financial year-end data for much longer. Our financial year ends in July, so we want to keep each July's data for the next 15 years, adding/dropping it as new data comes in, and users will access this data whenever they wish.
So, what I did was:
1) created a new copy of each table whose financial year-end data needs to be kept
2) wrote an insert append script to copy data from each base table into its financial year-end table
3) wrote housekeeping scripts to maintain the financial year-end tables
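For what it's worth, the three steps above might look roughly like this. All table, column, and partition names here are made up, and the real partitioning scheme would have to match your warehouse's:

```sql
-- 1) Create the year-end copy with the same columns as the base table
--    but no rows (the matching partitioning still has to be defined).
CREATE TABLE fye_sales AS
SELECT * FROM base_sales WHERE 1 = 0;

-- 2) Direct-path insert of the July data into the year-end table.
INSERT /*+ APPEND */ INTO fye_sales
SELECT *
FROM   base_sales
WHERE  sale_date >= DATE '2007-07-01'
AND    sale_date <  DATE '2007-08-01';
COMMIT;  -- a direct-path insert must be committed before the table is queried again

-- 3) Housekeeping: add the newest year-end partition, and drop the
--    oldest one once the 15-year window rolls past it.
ALTER TABLE fye_sales ADD PARTITION p_fy2008 VALUES LESS THAN (DATE '2008-08-01');
ALTER TABLE fye_sales DROP PARTITION p_fy1993;
```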
I'm not sure this is the best approach, and I've already hit a problem:
1) If a user makes a structural change to a base table, it has to be applied to the financial year-end table as well, otherwise the load fails, because on each copy I want every column copied.
Of course this can be avoided by writing one additional script that reads the data dictionary and confirms the table structures match before copying the data.
But is this the right way to do it? I find it a bit weird and a bit messy... unprofessional.
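A minimal sketch of that dictionary check, assuming both tables live in the same schema (table names are made up); a non-empty result means the structures have drifted apart:

```sql
-- Columns that exist on the base table but not (or not with the same
-- definition) on the year-end copy; run it both ways to catch drops too.
SELECT column_name, data_type, data_length, data_precision, data_scale
FROM   user_tab_columns
WHERE  table_name = 'BASE_SALES'
MINUS
SELECT column_name, data_type, data_length, data_precision, data_scale
FROM   user_tab_columns
WHERE  table_name = 'FYE_SALES';
```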
Thanks for the link to the article, it was very informative.
Actually, I have one small question. When you move data from one table to another, for instance from a base table to a sub table, and you want to make sure every new column created on the base is replicated to the sub table so the data is copied in full, is there any way to detect structure changes on the base table, so that before the copy process kicks off we are sure the number of columns and the other objects (indexes, constraints) are exactly the same?
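One possible approach, sketched against the Oracle data dictionary (table names are examples): since index and constraint names usually differ between the two tables, compare the columns they cover rather than the names. Run each query in both directions; any rows returned mean the structures differ, and the copy job can abort before loading:

```sql
-- Indexed columns on the base table with no counterpart on the copy.
SELECT column_name, column_position
FROM   user_ind_columns
WHERE  table_name = 'BASE_SALES'
MINUS
SELECT column_name, column_position
FROM   user_ind_columns
WHERE  table_name = 'FYE_SALES';

-- Constraint types and columns on the base table missing on the copy.
SELECT c.constraint_type, cc.column_name
FROM   user_constraints  c
JOIN   user_cons_columns cc ON cc.constraint_name = c.constraint_name
WHERE  c.table_name = 'BASE_SALES'
MINUS
SELECT c.constraint_type, cc.column_name
FROM   user_constraints  c
JOIN   user_cons_columns cc ON cc.constraint_name = c.constraint_name
WHERE  c.table_name = 'FYE_SALES';
```

Wrapped in a PL/SQL pre-check, the copy procedure could count the rows from these queries and raise an error instead of loading when the count is non-zero.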