  1. #1
    Join Date
    May 2004
    Posts
    42

    Unanswered: Performance Tuning..

    Hi,
    Can anybody help me out in solving the following problem?

    The following SQL statement sometimes takes 300 milliseconds and sometimes 3 seconds. I want to make the execution time consistent.

    SELECT /*+ INDEX (NAD_SOURCE_NAMES_ADDRESSES PK_NAD_SOURCE_NAMES_ADDRESSES) */
           TR_SUB_PREM_KEY, CIR_KEY, CIR_TYPE,
           po_box, sub_premise, premise_name,
           thoroughfare_number, dependant_thoroughfare,
           thoroughfare, double_depend_locality, locality,
           LTRIM(RTRIM(SUBSTR(source_postcode, 1,
                DECODE(INSTR(source_postcode,' ') - 1, -1, 4, INSTR(source_postcode,' ') - 1)))),
           LTRIM(RTRIM(SUBSTR(source_postcode,
                DECODE(INSTR(source_postcode,' ') + 1, 1, 5, INSTR(source_postcode,' ') + 1)))),
           post_town, county, CHERISHED_NAME, Source_address_structure,
           country, address_status, org_name, 'NAD_SOURCE_NAMES_ADDRESSES'
      FROM NAD_SOURCE_NAMES_ADDRESSES
     WHERE NAD_SOURCE_NAMES_ADDRESSES.TR_SUB_PREM_KEY = 'R00000016682'
       AND EXISTS (SELECT SOURCE_TYPE FROM TAB1)

    I have an index on NAD_SOURCE_NAMES_ADDRESSES on the columns Tr_Sub_Prem_Key and Source_Type.

    The explain plan statistics are as follows:

    Operation                                     Object Name                                    Rows  Bytes  Cost

    SELECT STATEMENT (Optimizer Mode=FIRST_ROWS)                                                 1            5
      FILTER
        TABLE ACCESS BY INDEX ROWID               NAD_OWNER_DEV.NAD_SOURCE_NAMES_ADDRESSES       1     97     5
          INDEX RANGE SCAN                        NAD_OWNER_DEV.PK_NAD_SOURCE_NAMES_ADDRESSES    1            4
        TABLE ACCESS FULL                         NAD_OWNER_DEV.TAB1                             4K           11
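    For reference, a plan like the one above can be regenerated with EXPLAIN PLAN and (on 9iR2 and later) DBMS_XPLAN; this is only a sketch using the posted statement, with the column list abbreviated:

    Code:
    EXPLAIN PLAN FOR
    SELECT /*+ INDEX (NAD_SOURCE_NAMES_ADDRESSES PK_NAD_SOURCE_NAMES_ADDRESSES) */
           tr_sub_prem_key, cir_key, cir_type    -- remaining columns omitted in the sketch
      FROM nad_source_names_addresses
     WHERE tr_sub_prem_key = 'R00000016682'
       AND EXISTS (SELECT source_type FROM tab1);

    -- formatted plan from PLAN_TABLE
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);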



    Thanks In advance

    Thanks & Regards
    Rasmi Ranjan Moharathy

  2. #2
    Join Date
    Mar 2002
    Location
    Reading, UK
    Posts
    1,137
    The two most likely possibilities:

    1) NAD_SOURCE_NAMES_ADDRESSES.TR_SUB_PREM_KEY = 'R00000016682'
    Does the value change, and if so, does the query return a different number of rows depending on that value? It might be that the run taking 3 seconds returns lots of rows while the one returning in milliseconds returns just a few rows (see the sketch after this list).

    2) Your server is under variable load, so how busy the server is affects your query response times.
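    For point 1, one quick way to check is to count rows per key value. This is only a sketch against the posted table (it full-scans, so run it when the system is quiet):

    Code:
    -- how many rows each key value actually has
    SELECT tr_sub_prem_key, COUNT(*) AS cnt
      FROM nad_source_names_addresses
     GROUP BY tr_sub_prem_key
     ORDER BY cnt DESC;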

    Alan

  3. #3
    Join Date
    May 2004
    Posts
    42
    Thanks AlanP.
    I got your point, but point number 1 (changes during retrieval) does not apply to us, as there is no change to our database during retrieval. So what can I do? Almost all the statements in my package are tuned. What should I do?

  4. #4
    Join Date
    Mar 2002
    Location
    Reading, UK
    Posts
    1,137
    I think you misunderstood my point: does the query return a different number of rows depending on the value of NAD_SOURCE_NAMES_ADDRESSES.TR_SUB_PREM_KEY?

    Alan

  5. #5
    Join Date
    May 2004
    Posts
    42
    Hi,
    No, the query does not return a different number of rows depending on the value of NAD_SOURCE_NAMES_ADDRESSES.TR_SUB_PREM_KEY.

    Let me describe the situation in detail.

    I have a package which does a search on an address table (the table contains 50 million rows). The results are inserted into a global temporary table (on commit preserve rows), several manipulations are done as per business rules, and the final data is prepared in this GTT. This final result is sent back to the calling program through a reference cursor. Our time limit for completing the entire process is 200 milliseconds, and we are able to achieve this in a single-user environment. But when the number of users increases (say 25 concurrent users), the final select statement which sends the data back through the reference cursor takes around 4 seconds to complete. The select statement is as follows:

    OPEN v_delete_output FOR
        SELECT tr_sub_prem_key as v_address_key,
               source_key as v_source_key,
               address_status as v_address_status,
               tr_org_name as v_org_name,
               tr_po_box as v_po_box,
               tr_sub_premise as v_sub_premise,
               tr_premise_name as v_premise_name,
               tr_thoroughfare_number as v_thoroughfare_number,
               decode(source_address_structure,'S','S','U','U',' ') as v_source_address_structure,
               order_status
          FROM tab4
         WHERE order_status = '1'
           AND rownum = 1;


    Here Tab4 is the global temporary table and v_delete_output is the reference cursor.
    Why is there this difference in performance when the number of users increases? Does it have anything to do with using a global temporary table?
    Can the performance be improved in any way?
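    For context, a GTT of the kind described would be declared roughly like this. The column list and datatypes below are illustrative only, not the real TAB4 definition:

    Code:
    -- sketch: session-private working table, rows kept across commits
    CREATE GLOBAL TEMPORARY TABLE tab4 (
        tr_sub_prem_key  VARCHAR2(12),
        source_key       NUMBER,
        address_status   VARCHAR2(1),
        order_status     VARCHAR2(1)
        -- ... remaining address columns ...
    ) ON COMMIT PRESERVE ROWS;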

    Some new information:

    cpu_count                  integer      6
    parallel_automatic_tuning  boolean      FALSE
    parallel_max_servers       integer      5
    parallel_min_percent       integer      0
    parallel_min_servers       integer      0
    sga_max_size               big integer  135033128
    shared_pool_reserved_size  big integer  5033164



    Thanks in advance.

    Regards
    Rasmi

  6. #6
    Join Date
    Mar 2002
    Location
    Reading, UK
    Posts
    1,137
    Things to consider:

    1. Are you using MTS? This does have some overhead and may slow you down. Try using dedicated servers, provided the number of connections isn't too high.
    2. If you have a large number of users going in on one table, is it partitioned? This will help.
    3. How many records are there in the GTT for your session (not for all sessions)? If you have a reasonable number, an index may help. Also, 'where order_status = '1' and rownum=1' is a bit strange, as it will only look at one record regardless of order_status.
    4. Your SGA max size doesn't look very big. How many connections do you expect and how much RAM do you have? What are your PGA and buffer cache sizes? Use the keep/recycle buffer pools.
    5. Ensure your undo tablespace is on its own disk/disk stripe.
    6. Run your procedure with tracing on to see if it does more physical reads when it is busy (a minimal sketch follows below). If so, it could be that you just need to increase the size of your buffer pools.
    7. How do you know it is the last select which takes so much time, and not the earlier statements in the stored proc?
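    A minimal tracing sketch for point 6 (the procedure call is a placeholder; tkprof is run against the trace file written to user_dump_dest on the server):

    Code:
    ALTER SESSION SET timed_statistics = TRUE;
    ALTER SESSION SET sql_trace = TRUE;

    -- EXEC your_package.your_procedure(...);   -- placeholder for the real call

    ALTER SESSION SET sql_trace = FALSE;
    -- then on the server:
    --   tkprof <tracefile>.trc trace_report.txt sys=no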

    Alan

  7. #7
    Join Date
    Jul 2003
    Posts
    2,296
    You don't need to match any value to the table in your EXISTS clause? Any value in that table will do?

    I think you need to match:
    PHP Code:
    AND EXISTS (
        SELECT NULL
          FROM tab1
         WHERE tab1.source_type = nad_source_names_addresses.source_type)
    - The_Duck
    you can lead someone to something but they will never learn anything ...

  8. #8
    Join Date
    Jul 2003
    Posts
    2,296
    How about this? I cleaned up your rtrim/ltrim lines as well; it looked like you just wanted the strings before and after the space (' ').
    PHP Code:
    SELECT /*+ INDEX (NAD_SOURCE_NAMES_ADDRESSES PK_NAD_SOURCE_NAMES_ADDRESSES) */
           tr_sub_prem_key, cir_key, cir_type, po_box, sub_premise, premise_name,
           thoroughfare_number, dependant_thoroughfare,
           thoroughfare, double_depend_locality, locality,
           trim(substr(source_postcode, 1, instr(source_postcode, ' '))),
           trim(substr(source_postcode, instr(source_postcode, ' '), length(source_postcode))),
           post_town, county, cherished_name,
           source_address_structure, country,
           address_status, org_name, 'NAD_SOURCE_NAMES_ADDRESSES'
      FROM nad_source_names_addresses
     WHERE nad_source_names_addresses.tr_sub_prem_key = 'R00000016682'
       AND EXISTS (
            SELECT NULL
              FROM tab1
             WHERE tab1.source_type = nad_source_names_addresses.source_type)
    I would be interested to see what the CBO does if you remove the hint.
    Could you post the explain plan and statistics running both ways?
    - The_Duck
    you can lead someone to something but they will never learn anything ...

  9. #9
    Join Date
    May 2004
    Posts
    42
    Hi,
    Thanks for your reply. I should also have mentioned one more thing: in my package I use 16 to 17 EXECUTE IMMEDIATE statements for data manipulations in the global temporary table. The thing is, during concurrent runs there is no fixed execution time for any statement; sometimes it slows down after the 2nd statement, sometimes after the 4th, and so on. Strangely, the maximum time is spent passing the output through the ref cursor, which contains only one row. Please find the document attached and go through it if you have time.

    Please find the file attached for more information...

    Thanks in advance
    Regards
    Rasmi
    Attached Files

  10. #10
    Join Date
    May 2004
    Posts
    42
    Hi Alan

    Please find below the answers to your queries.

    1. Are you using MTS? This does have some overhead and may slow you down. Try using dedicated servers, provided the number of connections isn't too high.
    Ans: No, we are not using MTS.

    2. If you have a large number of users going in on one table, is it partitioned? This will help.
    Ans: The GTT is not partitioned.

    3. How many records are there in the GTT for your session (not for all sessions)? If you have a reasonable number, an index may help. Also, 'where order_status = '1' and rownum=1' is a bit strange, as it will only look at one record regardless of order_status.
    Ans: This is our requirement, as we return only one record having the lowest order status. The status may be 1, 2, 3, 4, etc., and there can be multiple records for one status as well (see the sketch below).
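    If the intent is "the single row with the lowest order status, whatever it happens to be", a sketch would be to order in an inline view before applying rownum (order_status is compared as a string here, as in the original query):

    Code:
    OPEN v_delete_output FOR
        SELECT *
          FROM (SELECT tr_sub_prem_key, source_key, address_status, order_status
                  FROM tab4
                 ORDER BY order_status)
         WHERE ROWNUM = 1;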

    4. Your SGA max size doesn't look very big. How many connections do you expect and how much RAM do you have? What are your PGA and buffer cache sizes? Use the keep/recycle buffer pools.
    Ans:
    sga_max_size           big integer  135033128
    pga_aggregate_target   big integer  0
    total PGA inuse        24.6 Mbytes
    total PGA allocated    42.1 Mbytes
    maximum PGA allocated  47.2 Mbytes

    5. Ensure your undo tablespace is on its own disk/disk stripe.

    6. Run your procedure with tracing on to see if it does more physical reads when it is busy. If so, it could be that you just need to increase the size of your buffer pools.

    Other relevant parameters are as follows:

    db_block_buffers integer 2000
    db_keep_cache_size big integer 0
    buffer_pool_keep string
    buffer_pool_recycle string

    mts_circuits integer 0
    mts_dispatchers string
    mts_listener_address string
    mts_max_dispatchers integer 5
    mts_max_servers integer 20
    mts_multiple_listeners boolean FALSE
    mts_servers integer 0
    mts_service string nad_dev
    mts_sessions integer 0

    Thanks In Advance.

    Anticipating a reply from your side.

    Regards
    Rasmi

  11. #11
    Join Date
    Jul 2003
    Posts
    2,296
    FYI -
    TRIM = ltrim(rtrim(x))

    ie:
    PHP Code:
    SELECT LTRIM(RTRIM(v_alk))
      INTO v3_alk
      FROM DUAL;

    IF LTRIM(RTRIM(v_source_flag)) = 'Y' THEN    -- 201
    could be converted into:
    PHP Code:
    v3_alk := TRIM(v_alk);

    IF TRIM(v_source_flag) = 'Y' THEN    -- 201
    You don't need all those EXECUTE IMMEDIATE statements if you are running DML; EXECUTE IMMEDIATE is used mainly for DDL (a small sketch follows below).
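    For illustration only (the DELETE and the status value are placeholders, using the GTT from above):

    Code:
    BEGIN
        -- dynamic SQL: the statement text is built and parsed at run time
        EXECUTE IMMEDIATE 'DELETE FROM tab4 WHERE order_status = :st' USING '1';

        -- static SQL: the same DML compiled directly into the package
        DELETE FROM tab4 WHERE order_status = '1';
    END;
    /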
    - The_Duck
    you can lead someone to something but they will never learn anything ...

  12. #12
    Join Date
    Jul 2003
    Posts
    2,296
    Instead of lines 26-34 and line 91 in the attached file:

    EXECUTE IMMEDIATE v_sna_statement||v3_alk||''''||' AND EXISTS (SELECT SOURCE_TYPE FROM TAB1) ';

    you could substitute this:
    PHP Code:
    INSERT INTO TAB3 (
        tr_sub_prem_key, source_key, source_type,
        tr_po_box, tr_sub_premise, tr_premise_name,
        tr_thoroughfare_number, tr_depend_thoroughfare, tr_thoroughfare,
        tr_double_depend_locality, tr_locality,
        tr_postal_outcode, tr_postal_incode,
        tr_town, tr_county, cherished_name,
        source_address_structure, tr_country_name,
        address_status, tr_org_name, source_table)
    SELECT /*+ PARALLEL (NAD_SOURCE_NAMES_ADDRESSES,4) */
        TR_SUB_PREM_KEY, CIR_KEY, CIR_TYPE,
        po_box, sub_premise, premise_name,
        thoroughfare_number, dependant_thoroughfare, thoroughfare,
        double_depend_locality, locality,
        trim(substr(source_postcode, 1, instr(source_postcode, ' '))),
        trim(substr(source_postcode, instr(source_postcode, ' '), length(source_postcode))),
        post_town, county, CHERISHED_NAME,
        Source_address_structure, country,
        address_status, org_name, 'NAD_SOURCE_NAMES_ADDRESSES'
    FROM NAD_SOURCE_NAMES_ADDRESSES
    WHERE NAD_SOURCE_NAMES_ADDRESSES.TR_SUB_PREM_KEY = v3_alk
      AND EXISTS (SELECT NULL FROM TAB1
                   WHERE tab1.source_type = NAD_SOURCE_NAMES_ADDRESSES.source_type);
    - The_Duck
    you can lead someone to something but they will never learn anything ...

  13. #13
    Join Date
    Mar 2002
    Location
    Reading, UK
    Posts
    1,137
    2. If you have a large number of users going in on one table, is it partitioned? This will help.
    Ans: The GTT is not partitioned.

    But what about the tables you're selecting from to populate your GTT? Presumably they have a large number of records, so they may benefit from partitioning (as well as helping parallel query, if you're using it); see the sketch below.
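    A hash-partitioned layout for a big source table might look roughly like this; the partition count, datatypes and column list are placeholders, only a sketch:

    Code:
    CREATE TABLE nad_source_names_addresses (
        tr_sub_prem_key  VARCHAR2(12),
        source_type      VARCHAR2(10)
        -- ... remaining address columns ...
    )
    PARTITION BY HASH (tr_sub_prem_key)
    PARTITIONS 8;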

    3. How many records are there in the GTT for your session (not for all sessions)? If you have a reasonable number, an index may help. Also, 'where order_status = '1' and rownum=1' is a bit strange, as it will only look at one record regardless of order_status.
    Ans: This is our requirement, as we return only one record having the lowest order status. The status may be 1, 2, 3, 4, etc., and there can be multiple records for one status as well.

    If you're saying the maximum number of records ever in the table is less than a hundred for your session, then forget the index idea; it will just add overhead.
    4. Your SGA max size doesn't look very big. How many connections do you expect and how much RAM do you have? What are your PGA and buffer cache sizes? Use the keep/recycle buffer pools.
    Ans:
    sga_max_size           big integer  135033128
    pga_aggregate_target   big integer  0
    total PGA inuse        24.6 Mbytes
    total PGA allocated    42.1 Mbytes
    maximum PGA allocated  47.2 Mbytes

    OK, this does look small and may be affecting you. If your database is the only thing running on the server, your sga_max_size + pga_aggregate_target should be using well over 50% of the available RAM, and probably up to 80% depending on OS overhead.

    Set pga_aggregate_target, as recommended by Oracle. Try assigning 100MB to start with and monitor v$pgastat (a sketch follows below).
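    A sketch of that (100M is just the suggested starting point; needs the ALTER SYSTEM privilege):

    Code:
    ALTER SYSTEM SET pga_aggregate_target = 100M;

    -- then watch how the automatic PGA memory is used
    SELECT name, value FROM v$pgastat;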

    Then allocate RAM for the keep and recycle pools. Assign small lookup tables to the KEEP pool and very large tables to the RECYCLE pool (roughly as in the sketch below).
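    Roughly like this; the sizes and table names are placeholders, and it assumes you move from db_block_buffers to the newer db_cache_size style parameters:

    Code:
    ALTER SYSTEM SET db_keep_cache_size    = 16M SCOPE = SPFILE;
    ALTER SYSTEM SET db_recycle_cache_size = 16M SCOPE = SPFILE;

    -- small, hot lookup table stays cached; the huge table won't flush the default pool
    ALTER TABLE small_lookup_table         STORAGE (BUFFER_POOL KEEP);
    ALTER TABLE nad_source_names_addresses STORAGE (BUFFER_POOL RECYCLE);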


    5. Ensure your undo tablespace is on its own disk/disk stripe.

    Optimize the disks for the undo tablespace: allocate multiple datafiles onto their own disk/stripe, as this could be a bottleneck.

    Look up Statspack on Google and set up a job to run it regularly, as it provides plenty of diagnostic info which will also help you (see the snapshot sketch below).
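    In short, once it is installed (spcreate.sql) you just snap periodically and report between two snapshots:

    Code:
    -- take a snapshot (schedule this, e.g. via DBMS_JOB)
    EXEC statspack.snap;

    -- later, generate a report between two snapshot ids
    -- SQL> @?/rdbms/admin/spreport.sql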

    Alan

    P.S. Your debugging lines which record the time and then commit will affect your response times. Try removing all the commits except the last one, as this will give a better feel for the timings.

  14. #14
    Join Date
    May 2004
    Posts
    42
    Thanks AlanP for your valuable suggestions. I will remove all the commits and test the execution time after making the necessary changes in the parameter file.

    Thanks & Regards
    Rasmi
