  1. #1
    Join Date
    Mar 2006
    Location
    Tuticorin, India
    Posts
    103

    Unanswered: Stored Proc Performance Improvement

    I have coded a stored procedure which is planned to run only once in production, to process historic data from a DB2 8.2 database.

    It reads from a partitioned table and, based on some conditions, inserts the result into another table and updates the same row (using an updatable cursor). Right now it takes around 2 minutes to process 10,000 rows, so when I did a test run against 3 million rows it took around 30 hours to complete.

    This is a concern because in production it has to run against 50 million rows, which would mean roughly 500 hours. I think I'm missing something in my stored procedure that is hurting performance.

    I ran explain against the 4 cursors I'm using and all of them use index scans. The statistics are also up to date and 100% accurate.

    The cursors I'm using are as follows:
    DECLARE cur1 CURSOR WITH HOLD FOR
    SELECT col1, col2, col3
    FROM smt.order_detail
    WHERE ORDER_CSI = 1
    AND (DWH_EFCTV_TIMSTM >= IN_LAST_RUN_TIMESTAMP
         OR DWH_UPD_TIMSTM >= IN_LAST_RUN_TIMESTAMP)
    FOR UPDATE OF col4, col5, col6;

    DECLARE cur2 CURSOR WITH HOLD FOR
    SELECT col1, col2, col3
    FROM smt.order_detail
    WHERE ORDER_CSI IN (7, 8, 9)
    AND (DWH_EFCTV_TIMSTM >= IN_LAST_RUN_TIMESTAMP
         OR DWH_UPD_TIMSTM >= IN_LAST_RUN_TIMESTAMP)
    FOR UPDATE OF col4, col5, col6;

    DECLARE cur3 CURSOR WITH HOLD FOR
    SELECT col1, col2, col3
    FROM smt.order_detail
    WHERE ORDER_CSI = 10
    AND (DWH_EFCTV_TIMSTM >= IN_LAST_RUN_TIMESTAMP
         OR DWH_UPD_TIMSTM >= IN_LAST_RUN_TIMESTAMP)
    FOR UPDATE OF col4, col5, col6;

    DECLARE cur4 CURSOR WITH HOLD FOR
    SELECT col1, col2, col3, col4, col5, col6, col7
    FROM smt.order_detail_lookup
    WHERE VIN_ATTRB_CD = IN_VIN_ENCODING_ATTRB_CD
    WITH UR;

    Can you give me any tips, please?

  2. #2
    Join Date
    Jan 2003
    Posts
    4,292
    Provided Answers: 5
    You did not specify which OS and which FixPak you are using.

    My guess is that the updatable cursors are your problem. Try using a global temp table instead; they are really fast.

    Andy

  3. #3
    Join Date
    Mar 2006
    Location
    Tuticorin, India
    Posts
    103
    I'm sorry; it's on AIX 5.3 using DB2 UDB V8.2. Yes, I'm using 3 updatable cursors, and the target table is a partitioned table spread across 7 logical partitions.

    Can you please give me more information on how to use a global temporary table in my scenario?

  4. #4
    Join Date
    Dec 2005
    Posts
    39
    Any particular reason why you want to have 3 cursors? Can't you combine the 3 cursors into 1, considering the only difference in filter criteria among the 3 seems to be ORDER_CSI? If you want to take a particular action based on this field's value, you can select it as well and achieve the required behavior while traversing the cursor.

    Can you also try flattening the timestamp part of the WHERE clause into a UNION of both conditions rather than using OR? In certain cases a UNION is faster than an OR. It might also make sense to keep an upper limit on the timestamp part, either by using BETWEEN or < rather than relying on a plain >=. If you have the luxury, try these out and see if they help.
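    A single combined cursor along those lines might look like this (a sketch only; column and parameter names are taken from the original post, and note that in DB2 a cursor over a UNION is read-only, so the UNION rewrite would suit a temp-table approach rather than FOR UPDATE):

    ```sql
    -- One cursor in place of cur1..cur3: ORDER_CSI is selected too, so the
    -- loop body can branch on its value instead of opening three cursors.
    DECLARE cur_all CURSOR WITH HOLD FOR
    SELECT ORDER_CSI, col1, col2, col3
    FROM smt.order_detail
    WHERE ORDER_CSI IN (1, 7, 8, 9, 10)
      AND (DWH_EFCTV_TIMSTM >= IN_LAST_RUN_TIMESTAMP
           OR DWH_UPD_TIMSTM >= IN_LAST_RUN_TIMESTAMP)
    -- An upper bound could be added, e.g.
    --   AND DWH_EFCTV_TIMSTM BETWEEN IN_LAST_RUN_TIMESTAMP
    --                            AND IN_CURRENT_RUN_TIMESTAMP
    -- where IN_CURRENT_RUN_TIMESTAMP would be a hypothetical extra parameter.
    FOR UPDATE OF col4, col5, col6;
    ```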

  5. #5
    Join Date
    Jan 2003
    Posts
    4,292
    Provided Answers: 5
    Based on the limited information you gave in your original post, you are processing through your data and, depending upon the value of ORDER_CSI, you update col4, col5, col6 differently based on col1, col2, col3. You can use a global temp table to do the work in, and then apply the work to the persisted table in large blocks. A lot depends on the particulars of your situation. An example:

    DECLARE loop_counter BIGINT;
    DECLARE max_counter BIGINT;

    declare global temporary table mytemptable (counter bigint, someuniquecolumn ..., order_csi ..., col1 ..., col2 ..., col3 ..., col4 ..., col5 ..., col6 ...) on commit preserve rows not logged;

    insert into session.mytemptable
    select row_number() over (), someuniquecolumn, order_csi, col1, col2, col3, col4, col5, col6
    from smt.order_detail
    where DWH_EFCTV_TIMSTM >= IN_LAST_RUN_TIMESTAMP
       or DWH_UPD_TIMSTM >= IN_LAST_RUN_TIMESTAMP;

    create index mtt_indx1 on session.mytemptable (order_csi asc);
    create index mtt_indx2 on session.mytemptable (counter asc);

    update session.mytemptable set (col4, col5, col6) = (?, ?, ?) where order_csi = 1;

    update session.mytemptable set (col4, col5, col6) = (?, ?, ?) where order_csi between 7 and 9;

    update session.mytemptable set (col4, col5, col6) = (?, ?, ?) where order_csi = 10;

    -- then you do the updates to the persisted table from the temp table in a loop so that you do not get locking issues.

    set loop_counter = 0;
    set max_counter = (select max(counter) from session.mytemptable);

    while (loop_counter < max_counter) do
        update smt.order_detail as x
        set (col4, col5, col6) =
            (select t.col4, t.col5, t.col6
             from session.mytemptable t
             where t.someuniquecolumn = x.someuniquecolumn)
        where x.someuniquecolumn in
            (select someuniquecolumn from session.mytemptable
             where counter between loop_counter + 1 and loop_counter + 5000);
        COMMIT WORK;
        set loop_counter = loop_counter + 5000;
    END WHILE;


    HTH

    Andy

  6. #6
    Join Date
    Mar 2006
    Location
    Tuticorin, India
    Posts
    103
    Hi Andy,

    Thanks for your valuable comments; I will work on this and get back to you as soon as possible.

  7. #7
    Join Date
    Mar 2006
    Location
    Tuticorin, India
    Posts
    103
    I think that this block of code writes the updated data back from the temporary table to the main order_detail partitioned table. Am I right?

    while (loop_counter < max_counter) do
        update smt.order_detail as x
        set (col4, col5, col6) =
            (select t.col4, t.col5, t.col6
             from session.mytemptable t
             where t.someuniquecolumn = x.someuniquecolumn)
        where x.someuniquecolumn in
            (select someuniquecolumn from session.mytemptable
             where counter between loop_counter + 1 and loop_counter + 5000);
        COMMIT WORK;
        set loop_counter = loop_counter + 5000;
    END WHILE;

  8. #8
    Join Date
    Jan 2003
    Posts
    4,292
    Provided Answers: 5
    Quote Originally Posted by db2udbgirl
    I think that this block of code writes the updated data back from the temporary table to the main order_detail partitioned table. Am I right?

    while (loop_counter < max_counter) do
        update smt.order_detail as x
        set (col4, col5, col6) =
            (select t.col4, t.col5, t.col6
             from session.mytemptable t
             where t.someuniquecolumn = x.someuniquecolumn)
        where x.someuniquecolumn in
            (select someuniquecolumn from session.mytemptable
             where counter between loop_counter + 1 and loop_counter + 5000);
        COMMIT WORK;
        set loop_counter = loop_counter + 5000;
    END WHILE;
    Yes, that is the intention. You will have to customize it to your specific needs.

    Andy

  9. #9
    Join Date
    Aug 2001
    Location
    UK
    Posts
    4,650
    As you are using DPF hash partitioning on the base table, you may consider using hash-partitioned global temp tables for the operation, with the partitioning key the same as the original table's. This will reduce data movement and therefore improve performance.
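    A sketch of such a declaration, assuming a user temporary tablespace (here the hypothetical usertempts) spans the same partitions as the base table and that someuniquecolumn from the earlier example is its partitioning key (column types are placeholders):

    ```sql
    -- Collocate the temp table with smt.order_detail by reusing its
    -- partitioning key; the block updates then stay local to each partition.
    DECLARE GLOBAL TEMPORARY TABLE mytemptable (
        counter          BIGINT,
        someuniquecolumn BIGINT,
        order_csi        INTEGER,
        col4             INTEGER,
        col5             INTEGER,
        col6             INTEGER
    ) ON COMMIT PRESERVE ROWS NOT LOGGED
      IN usertempts
      PARTITIONING KEY (someuniquecolumn);
    ```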
    Other performance improvements can come from:
    a) Increasing the LOGBUFSZ db cfg parameter.
    b) As you are doing an update on one table and inserting the original records into another in the same transaction, you may consider something like this:
    WITH TEMP1 as
    (
    SELECT * FROM NEW TABLE (INSERT INTO TARGET select * from sourcetable where somecolumn between somevalue and somevalue)
    ),
    TEMP2 as
    (
    SELECT * FROM OLD TABLE (UPDATE sourcetable set x=y where somecolumn between somevalue and somevalue )
    )
    SELECT 1 FROM sysibm.sysdummy1

    This lets you do the 2 tasks in one query. It might perform better, and it is very unlikely to perform worse than an INSERT followed by an UPDATE.
    Error handling also becomes easier.

    c) Use as high a 'commitcount' as practically possible; this will ensure a minimal number of range-selection queries against the DGTT.

    HTH

    Sathyaram
    Visit the new-look IDUG Website, and register to gain access to the excellent content.
