Hi,

I'm a newbie to data warehouses, and I have an excruciatingly slow variance report against a normalized database that takes 4 min 15 sec to run (about 1,000 rows). Eventually I'm going to need a separate report for each dept in a 25+ department company, which works out to almost 2 hours of processing.

This report compares two tables (budget and actual) for several different accounts, each for month-to-date, quarter-to-date, and year-to-date. The report looks like this:

              mtd act  mtd bud  mtd var  qtd act  qtd bud  qtd var  ...
Dept 01
  Employee 5
    GL 7100   #####    #####    #####    #####    #####    #####
    GL 7700   #####    #####    #####    #####    #####    #####


Both the budget and actual data sources are intermediary 'denormalized' tables that list each employee's amount per GL code for each pay period (twice a month). So the sample above is actually pulling from ALL records of all GL codes for every employee (72,000 records in each source table). A sample of the 'actual cost' source data is:

Employee 5 | Dept 01 | 1/15/2004 | GL 7100 | #####
Employee 5 | Dept 01 | 1/30/2004 | GL 7100 | #####
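
For reference, each period-to-date actual gets rolled up from that table with roughly this kind of query (table and column names are simplified here; the MTD and QTD figures use the same query with narrower date windows, and the budget table is rolled up the same way):

    SELECT   Dept, Employee, GL,
             SUM(Amount) AS ytd_actual
    FROM     ActualCost
    WHERE    PayDate BETWEEN '2004-01-01' AND '2004-06-30'   -- YTD window (dates are just placeholders)
    GROUP BY Dept, Employee, GL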

I know that a data mart / warehouse basically denormalizes and aggregates data to speed up this kind of report. Are my intermediary tables already accomplishing that, or is there an even better way to speed up this painfully slow report?
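
To make sure I'm picturing "aggregates" correctly: I imagine something like a summary table with the sums already stored, say one row per employee / GL / month instead of one per pay period (made-up names again):

    CREATE TABLE ActualMonthlySummary (
        Dept         VARCHAR(10),
        Employee     INTEGER,
        GL           INTEGER,
        FiscalMonth  INTEGER,        -- 1 through 12
        Amount       DECIMAL(12,2)   -- sum of the pay-period amounts for that month
    )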

Is there a good source somewhere for strategies on this?

The most time-consuming query in the report process is the one that joins the YTD Var query to the QTD and MTD Var queries. Run individually, those subqueries take 10 seconds each, but when the YTD query pulls in the other two with all the relationships in place, the result takes 4 minutes.
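
Stripped down, I think the shape of that final query is (the three Var queries are themselves saved queries that each do a budget/actual rollup):

    SELECT   y.Dept, y.Employee, y.GL,
             m.mtd_act, m.mtd_bud, m.mtd_var,
             q.qtd_act, q.qtd_bud, q.qtd_var,
             y.ytd_act, y.ytd_bud, y.ytd_var
    FROM     YTDVar AS y
    INNER JOIN QTDVar AS q
             ON q.Dept = y.Dept AND q.Employee = y.Employee AND q.GL = y.GL
    INNER JOIN MTDVar AS m
             ON m.Dept = y.Dept AND m.Employee = y.Employee AND m.GL = y.GL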

Thanks for any help.