I have to do data reporting for an application (LIMS Sample Manager) that puts data into one set of tables initially, then, after 20 days, moves the data to an identical set of 'committed' tables. I need the reports to transparently contain data from both sets of data.

Of course, I first tried just creating union views between each corresponding pair of tables and reporting from the views, but that killed performance completely.

We've decided to have a separate report-server database with a single set of tables, each maintained to contain all of the data from its corresponding 'live' and 'committed' tables on the application server.

I thought I might be able to do this with MViews, but the following test seems to tell me that I can't:

******************************************
Create Table Test1 (
F1 varchar2(10),
F2 varchar2(10),
constraint Test1_PK primary key(F1)
);

Create Table Test2 (
F1 varchar2(10),
F2 varchar2(10),
constraint Test2_PK primary key(F1)
);

Create materialized view log on Test1 with RowID;
Create materialized view log on Test2 with RowID;

Create materialized view Test_All
Build Immediate
Refresh Fast On Demand With RowID
As
Select
T.*,
RowID
From Test1 T
Union
Select
T.*,
RowID
From Test2 T;

ORA-12015: cannot create a fast refresh materialized view from a complex query
**********************************************

Is there any way to make fast refresh MViews work with a union query? I suspect that there isn't, but I hate to give up on the easy solution until I have to.
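For what it's worth, my reading of the Oracle docs is that fast refresh is supported for UNION ALL (not UNION) materialized views, provided each branch selects its rowid under a common alias and a distinct literal "marker" column, and each master table has a rowid MV log. I haven't verified this against the real schema, but assuming a given key only ever lives in one of the two tables at a time (so UNION ALL returns the same rows UNION would), the test case above would become something like:

```sql
Create materialized view log on Test1 with RowID;
Create materialized view log on Test2 with RowID;

Create materialized view Test_All
Build Immediate
Refresh Fast On Demand
As
Select T.F1, T.F2, T.RowID rid, 1 marker   -- marker column distinguishes the branches
From Test1 T
Union All
Select T.F1, T.F2, T.RowID rid, 2 marker
From Test2 T;
```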

I could make it work with 'complete refresh', but, as the accumulated data increases, the overhead of copying all of the data every time I want to refresh the MView would be unacceptable.

My next approach would be to create the materialized view logs but not the MViews and write a service that reads the logs and manually populates the report server tables.

Question: does anyone know how to read these log tables (or is there a utility for this purpose)? Here's an example:

******************************************
SQL> describe mlog$_Test1;
 Name                 Null?    Type
 -------------------- -------- ------------
 F1                            VARCHAR2(10)
 SNAPTIME$$                    DATE
 DMLTYPE$$                     VARCHAR2(1)
 OLD_NEW$$                     VARCHAR2(1)
 CHANGE_VECTOR$$               RAW(255)

SQL> Select * from mlog$_Test1;

F1         SNAPTIME$$ D O CHANGE_VECTOR$$
---------- ---------- - - ---------------
one        01-JAN-00  I N FE
three      01-JAN-00  I N FE
three      01-JAN-00  U U 04
one        01-JAN-00  D O 00
******************************************

Everything makes sense: I can tell what the primary key is and whether each row is an insert, update, or delete; but the changed data values are nowhere to be seen, so presumably they're encoded in the CHANGE_VECTOR$$ field. Is there a way to decode this field? (Also asking this question in a separate post.)
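If CHANGE_VECTOR$$ turns out to be just an internal bitmap of which columns changed (which is my current guess) rather than the data itself, then a reader of the log would have to fetch current values from the master table by key, something like:

```sql
Select m.F1,
       m.DMLTYPE$$,   -- I = insert, U = update, D = delete
       t.F2           -- current value from the master; NULL for deleted rows
From   mlog$_Test1 m
Left Outer Join Test1 t On t.F1 = m.F1;
```

This loses the pre-delete values, of course, which is part of why I'm also considering the trigger approach below.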

My last and least desirable option would be to put insert, update, and delete triggers on all of the production tables to write every action into my own custom log tables, then read those custom log tables with a service as described above. This may be where I wind up, however.
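A sketch of what one such trigger might look like for the Test1 table above (the log table name and layout are just my own invention, not anything from Sample Manager):

```sql
Create Table Test1_Log (
  log_time  timestamp default systimestamp,
  dml_type  varchar2(1),          -- 'I', 'U', or 'D'
  F1        varchar2(10),
  F2        varchar2(10)
);

Create or Replace Trigger Test1_Log_Trg
After Insert or Update or Delete on Test1
For Each Row
Begin
  If Deleting Then
    Insert into Test1_Log (dml_type, F1, F2) values ('D', :old.F1, :old.F2);
  Elsif Updating Then
    Insert into Test1_Log (dml_type, F1, F2) values ('U', :new.F1, :new.F2);
  Else
    Insert into Test1_Log (dml_type, F1, F2) values ('I', :new.F1, :new.F2);
  End If;
End;
/
```

The service would then sweep Test1_Log in log_time order and replay each action against the report-server table.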

Appreciate all ideas / input.