The query below is taking a long time, and I want to reduce its CPU cost, which is currently 6380348. Please help me out. The columns I am using in this query have indexes and primary keys.
SELECT DISTINCT ADDR.ADDR_1_DSC, ADDR.ADDR_2_DSC, ADDR.US_CITY_ID, CITY.CITY_NAM, COUNTY.COUNTY_NAM,
STATE.ADDR_STATE_CD, ADDR.ADDR_POSTAL_CD, ADDR.ADDRESS_UPDATE_DT, ADDR.DATA_SOURCE_TYPE
FROM TB_CMA006_CUSTOMER_ADDRESSES ADDR, TB_CMA040_CUSTOMER_ADDRESS_USE ADDR_USE, TB_CMA086_US_CITY CITY,
TB_CMA085_US_COUNTY COUNTY, TB_CMA004_US_STATE STATE
WHERE ADDR.CUSTOMER_MASTER_NUM = '1603246' AND ADDR_USE.CUSTOMER_MASTER_NUM = ADDR.CUSTOMER_MASTER_NUM
AND ADDR_USE.CUSTOMER_ADDRESS_CONTACT_NUM = ADDR.CUSTOMER_ADDRESS_CONTACT_NUM
AND ADDR_USE.SITE_USE_CD = 'BILL_TO' AND ADDR_USE.SITE_TYPE_PRIMARY_FLG = 'Y'
AND CITY.US_CITY_ID = ADDR.US_CITY_ID AND COUNTY.ADDR_COUNTRY_ID = CITY.ADDR_COUNTRY_ID
AND COUNTY.ADDR_STATE_ID = CITY.ADDR_STATE_ID AND COUNTY.COUNTY_ID = CITY.COUNTY_ID
AND STATE.ADDR_COUNTRY_ID = COUNTY.ADDR_COUNTRY_ID AND STATE.ADDR_STATE_ID = COUNTY.ADDR_STATE_ID
Description                          Obj Owner  Obj Name                        Cost  Cardinality  Bytes  CPU Cost  Optimizer
SELECT STATEMENT, GOAL = ALL_ROWS                                                 11            1    151   6380348  ALL_ROWS
  HASH UNIQUE                                                                     11            1    151   6380348
    NESTED LOOPS                                                                  10            1    151     82245
      NESTED LOOPS                                                                 9            1    139     73864
        NESTED LOOPS                                                               8            1    116     64632
          NESTED LOOPS                                                             7            1     86     55361
            TABLE ACCESS BY INDEX ROWID  CMADBA  TB_CMA040_CUSTOMER_ADDRESS_USE    5            1     24     38128  ANALYZED
              INDEX RANGE SCAN           CMADBA  CI_CMA040_ID1                     3            3            22164  ANALYZED
            TABLE ACCESS BY INDEX ROWID  CMADBA  TB_CMA006_CUSTOMER_ADDRESSES      2            1     62     17233  ANALYZED
              INDEX UNIQUE SCAN          CMADBA  CN_CMA006_PK                      1            1             9021  ANALYZED
          TABLE ACCESS BY INDEX ROWID    CMADBA  TB_CMA086_US_CITY                 1            1     30      9271  ANALYZED
            INDEX UNIQUE SCAN            CMADBA  CN_CMA086_PK                      0            1             1900  ANALYZED
        TABLE ACCESS BY INDEX ROWID      CMADBA  TB_CMA085_US_COUNTY               1            1     23      9231  ANALYZED
          INDEX UNIQUE SCAN              CMADBA  CN_CMA085_PK                      0            1             1900  ANALYZED
      TABLE ACCESS BY INDEX ROWID        CMADBA  TB_CMA004_US_STATE                1            1     12      8381  ANALYZED
        INDEX UNIQUE SCAN                CMADBA  CN_CMA004_PK                      0            1             1050  ANALYZED
Please run these and post the row counts:
SQL> select count(*) FROM TB_CMA006_CUSTOMER_ADDRESSES;
SQL> select count(*) FROM TB_CMA040_CUSTOMER_ADDRESS_USE;
Why don't you create a view for each table you are using and write your query against those views? Instead of working directly against the physical tables, which takes much more time to allocate space in memory, use logical views to write the SQL statements. This clue might help you in the future. Try it and see whether the problem is sorted out.
The example you gave makes it difficult to see which indexes exist on your tables and how they have been defined. The largest tables (TB_CMA040_CUSTOMER_ADDRESS_USE and TB_CMA006_CUSTOMER_ADDRESSES) should be the focal point for tuning.
The only filter criteria you apply to these two large tables are:
ADDR.CUSTOMER_MASTER_NUM = '1603246'
ADDR_USE.SITE_USE_CD = 'BILL_TO' AND ADDR_USE.SITE_TYPE_PRIMARY_FLG = 'Y'
So with the info above, I would ensure there is a single-column index on ADDR.CUSTOMER_MASTER_NUM and a concatenated index on (ADDR_USE.SITE_USE_CD, ADDR_USE.SITE_TYPE_PRIMARY_FLG).
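A sketch of those two indexes (the index names here are made up; use whatever fits your naming standard, and check first whether equivalent indexes already exist):

```sql
-- Single-column index on the ADDR filter column (index name is hypothetical)
CREATE INDEX IX_CMA006_CUST_MASTER_NUM
  ON TB_CMA006_CUSTOMER_ADDRESSES (CUSTOMER_MASTER_NUM);

-- Concatenated index on the two ADDR_USE filter columns (index name is hypothetical)
CREATE INDEX IX_CMA040_USE_CD_PRIMARY
  ON TB_CMA040_CUSTOMER_ADDRESS_USE (SITE_USE_CD, SITE_TYPE_PRIMARY_FLG);
```

After creating them, regather statistics on the tables so the optimizer can consider the new indexes.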
Since there are most likely very few distinct values in the SITE_USE_CD and SITE_TYPE_PRIMARY_FLG columns, I would consider a BITMAP INDEX concatenated across these two columns, although bitmap indexes don't do well on tables with heavy insert, update, and delete activity. This looks like a data warehouse, so if you're doing this correctly, you would have partitioned the large tables and loaded the data into them via exchange partition.
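If the table really is a bulk-loaded warehouse table, the bitmap variant would look something like this (again, the index name is hypothetical; don't do this on a table with heavy concurrent DML):

```sql
-- Bitmap index across the two low-cardinality filter columns
-- (suitable for read-mostly / bulk-loaded tables only)
CREATE BITMAP INDEX BX_CMA040_USE_CD_PRIMARY
  ON TB_CMA040_CUSTOMER_ADDRESS_USE (SITE_USE_CD, SITE_TYPE_PRIMARY_FLG);
```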
Also, investigate forcing a hash join in place of the nested loops shown in your explain plan.
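One way to test that is with a USE_HASH hint on the two large tables; this is just a sketch to compare plans, not a recommendation to leave the hint in permanently (verify with EXPLAIN PLAN that the join method actually changes):

```sql
SELECT /*+ USE_HASH(ADDR ADDR_USE) */ DISTINCT
       ADDR.ADDR_1_DSC, ADDR.ADDR_2_DSC, ADDR.US_CITY_ID, CITY.CITY_NAM, COUNTY.COUNTY_NAM,
       STATE.ADDR_STATE_CD, ADDR.ADDR_POSTAL_CD, ADDR.ADDRESS_UPDATE_DT, ADDR.DATA_SOURCE_TYPE
FROM   TB_CMA006_CUSTOMER_ADDRESSES ADDR, TB_CMA040_CUSTOMER_ADDRESS_USE ADDR_USE,
       TB_CMA086_US_CITY CITY, TB_CMA085_US_COUNTY COUNTY, TB_CMA004_US_STATE STATE
WHERE  ADDR.CUSTOMER_MASTER_NUM = '1603246'
AND    ADDR_USE.CUSTOMER_MASTER_NUM = ADDR.CUSTOMER_MASTER_NUM
AND    ADDR_USE.CUSTOMER_ADDRESS_CONTACT_NUM = ADDR.CUSTOMER_ADDRESS_CONTACT_NUM
AND    ADDR_USE.SITE_USE_CD = 'BILL_TO' AND ADDR_USE.SITE_TYPE_PRIMARY_FLG = 'Y'
AND    CITY.US_CITY_ID = ADDR.US_CITY_ID AND COUNTY.ADDR_COUNTRY_ID = CITY.ADDR_COUNTRY_ID
AND    COUNTY.ADDR_STATE_ID = CITY.ADDR_STATE_ID AND COUNTY.COUNTY_ID = CITY.COUNTY_ID
AND    STATE.ADDR_COUNTRY_ID = COUNTY.ADDR_COUNTRY_ID AND STATE.ADDR_STATE_ID = COUNTY.ADDR_STATE_ID;
```

With a single CUSTOMER_MASTER_NUM and good indexes, nested loops may well remain the cheaper plan; the hint is worth trying mainly if the index range scan on the ADDR_USE table is returning many rows.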