Results 1 to 11 of 11
  1. #1
    Join Date
    Dec 2003
    Location
    Karachi, Pakistan
    Posts
    45

    Unanswered: Query Performance

    Hello All

    I am trying to insert 19.9 million rows into one table, from my temporary staging schema to the production schema. It takes around 4 hours to execute, and around 200 such insert statements will have to be run.

    Does anybody have a solution to improve the performance of the insert statement? All of the insert statements together would require about 400 hours to run, which is not acceptable for me.

    An immediate reply would be highly appreciated.

    Noor Ali

  2. #2
    Join Date
    Sep 2002
    Location
    UK
    Posts
    5,171
    Provided Answers: 1
    1) Are you using the APPEND hint? :-

    INSERT /*+ APPEND */ INTO xxx SELECT ...;

    2) It is quicker to insert into a table with no indexes or constraints and then rebuild the indexes and constraints. (But beware that if your data isn't 100% clean then you are going to have a rough time reinstating constraints!)

    3) You can use NOLOGGING mode to speed up processing, but then you must perform a backup afterwards.

    Try searching AskTom for advice, e.g. this.
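Putting points 1-3 together, the load pattern might look like this sketch (table and index names are placeholders, not from the thread):

```sql
-- prod_tab, prod_tab_ix, staging_tab are illustrative names
ALTER TABLE prod_tab NOLOGGING;                   -- minimal redo for the direct-path load
ALTER INDEX prod_tab_ix UNUSABLE;                 -- skip index maintenance during the load
ALTER SESSION SET skip_unusable_indexes = TRUE;   -- (does not work for unique indexes)

INSERT /*+ APPEND */ INTO prod_tab                -- direct-path insert above the high-water mark
SELECT * FROM staging_tab;
COMMIT;                                           -- required before prod_tab can be queried again

ALTER INDEX prod_tab_ix REBUILD NOLOGGING;        -- rebuild once, after the load
ALTER TABLE prod_tab LOGGING;                     -- and take a backup afterwards
```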

  3. #3
    Join Date
    Dec 2003
    Location
    Karachi, Pakistan
    Posts
    45

    I have already done this

    Hi Andrew,

    Thanks for your reply.

    I have already followed your suggestions, except for number 1. Actually this is a fresh insertion, so do you think the APPEND hint will help me? My understanding of the APPEND hint is that it simply ignores the vacant space already in the table and starts writing from the first available free space in the tablespace.

    Noor Ali



    Quote Originally Posted by andrewst
    1) Are you using the APPEND hint? :-

    INSERT /*+ APPEND */ INTO xxx SELECT ...;

    2) It is quicker to insert into a table with no indexes or constraints and then rebuild the indexes and constraints. (But beware that if your data isn't 100% clean then you are going to have a rough time reinstating constraints!)

    3) You can use NOLOGGING mode to speed up processing, but then you must perform a backup afterwards.

    Try searching AskTom for advice, e.g. this.

  4. #4
    Join Date
    Apr 2003
    Location
    Greenville, SC (USA)
    Posts
    1,155
    Try running statspack and take a snapshot of the insertion process.
    This will show you where the "WAITS" truly are. Do you have other constraints that have to be checked for each insert? Do you have
    triggers that are firing? What is the insert statement that you are using?
    What version of Oracle are you running?

    Gregg
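Taking the snapshots could look like this (assumes STATSPACK is already installed under the PERFSTAT schema via spcreate.sql):

```sql
-- From a PERFSTAT session, bracket the load with two snapshots:
EXEC statspack.snap;
-- ... run the INSERT from the other session ...
EXEC statspack.snap;

-- Then generate the report; the script prompts for the
-- begin and end snapshot IDs:
@?/rdbms/admin/spreport.sql
```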

  5. #5
    Join Date
    Dec 2003
    Location
    Karachi, Pakistan
    Posts
    45

    Check it

    Hi

    No, there are no triggers, and my insert statement is given below:

    INSERT INTO CI_BILL_CHAR (
    BILL_ID
    ,CHAR_TYPE_CD
    ,SEQ_NUM
    ,VERSION
    ,CHAR_VAL
    ,ADHOC_CHAR_VAL
    ,CHAR_VAL_FK1
    ,CHAR_VAL_FK2
    ,CHAR_VAL_FK3
    ,CHAR_VAL_FK4
    ,CHAR_VAL_FK5
    ,SRCH_CHAR_VAL
    )
    SELECT
    Index_Table_Cinda.Customer_No_Ten||
    row_number() over (partition by customer_history.customer_no order by customer_history.customer_no, customer_history.bill_month) BILL_ID
    ,'BILL-MTH' CHAR_TYPE_CD
    ,'1' SEQ_NUM
    ,'100' VERSION
    ,to_Char(Customer_History.Bill_Month,'MMYYYY') CHAR_VAL
    ,' ' ADHOC_CHAR_VAL
    ,' ' CHAR_VAL_FK1
    ,' ' CHAR_VAL_FK2
    ,' ' CHAR_VAL_FK3
    ,' ' CHAR_VAL_FK4
    ,' ' CHAR_VAL_FK5
    ,' ' SRCH_CHAR_VAL
    FROM
    DUAL2
    ,CUSTOMER_HISTORY Customer_History
    ,CUSTOMER_BASE Customer_base
    ,FIELDSFORCCB FieldsForCCB
    ,INDEX_TABLE_CINDA Index_Table_cinda
    Where Customer_Base.Unit_Code = Index_Table_Cinda.Unit_Code
    And Customer_Base.Customer_no = Index_Table_Cinda.Customer_no_Nine
    And Customer_Base.Unit_Code = FieldsForCCB.Unit_Code
    And Customer_Base.Customer_no = FieldsForCCB.Customer_no
    And customer_base.Unit_Code = Customer_History.Unit_Code
    And Customer_Base.Customer_no = Customer_History.Customer_no;

    About the statspack report: OK, I will run it, but I don't think it will give me any help, because it's a fresh table in a fresh database. Let me check.

    Noor Ali

    Quote Originally Posted by gbrabham
    Try running statspack and take a snapshot of the insertion process.
    This will show you where the "WAITS" truly are. Do you have other constraints that have to be checked for each insert? Do you have
    triggers that are firing? What is the insert statement that you are using?
    What version of Oracle are you running?

    Gregg

  6. #6
    Join Date
    Apr 2003
    Location
    Greenville, SC (USA)
    Posts
    1,155
    Definitely run statspack to see where the waits are ...
    I'm also interested in seeing explain plan for :

    SQL> set autotrace trace
    SQL> set timing on
    sql> SELECT
    Index_Table_Cinda.Customer_No_Ten||
    row_number() over (partition by customer_history.customer_no order by customer_history.customer_no, customer_history.bill_month) BILL_ID
    ,'BILL-MTH' CHAR_TYPE_CD
    ,'1' SEQ_NUM
    ,'100' VERSION
    ,to_Char(Customer_History.Bill_Month,'MMYYYY') CHAR_VAL
    ,' ' ADHOC_CHAR_VAL
    ,' ' CHAR_VAL_FK1
    ,' ' CHAR_VAL_FK2
    ,' ' CHAR_VAL_FK3
    ,' ' CHAR_VAL_FK4
    ,' ' CHAR_VAL_FK5
    ,' ' SRCH_CHAR_VAL
    FROM
    DUAL2
    ,CUSTOMER_HISTORY Customer_History
    ,CUSTOMER_BASE Customer_base
    ,FIELDSFORCCB FieldsForCCB
    ,INDEX_TABLE_CINDA Index_Table_cinda
    Where Customer_Base.Unit_Code = Index_Table_Cinda.Unit_Code
    And Customer_Base.Customer_no = Index_Table_Cinda.Customer_no_Nine
    And Customer_Base.Unit_Code = FieldsForCCB.Unit_Code
    And Customer_Base.Customer_no = FieldsForCCB.Customer_no
    And customer_base.Unit_Code = Customer_History.Unit_Code
    And Customer_Base.Customer_no = Customer_History.Customer_no;

    Gregg

  7. #7
    Join Date
    Mar 2002
    Location
    Reading, UK
    Posts
    1,137
    Use the APPEND hint, as it should help, especially with your redo. If in doubt, try it out and see if it makes a difference.

    Other things to do include:
    1) Schedule it at a quiet time.
    2) Run groups of statements in parallel.
    3) Are the two schemas you're transferring data between on the same server and instance? If not, copy the temp table over to your prod database and then try doing the insert into the production table.

    Alan
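Point 2 can be done either with several sessions or with Oracle's parallel DML; a sketch against the thread's target table (the degree of 4 is an arbitrary example, and the SELECT is abbreviated):

```sql
ALTER SESSION ENABLE PARALLEL DML;   -- parallel DML is disabled by default

INSERT /*+ APPEND PARALLEL(b, 4) */ INTO ci_bill_char b
SELECT /*+ PARALLEL(ch, 4) */ ...    -- the same SELECT as in post #5
FROM   customer_history ch, ...;
COMMIT;                              -- must commit before the table can be queried again
```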

  8. #8
    Join Date
    Dec 2003
    Location
    Karachi, Pakistan
    Posts
    45

    Explain plan

    Here is the explain plan of my query:

    Execution Steps:

    Step # Step Name
    14 INSERT STATEMENT
    13 WINDOW [SORT]
    12 NESTED LOOPS
    10 NESTED LOOPS
    7 MERGE JOIN [CARTESIAN]
    4 MERGE JOIN [CARTESIAN]
    1 CCBINTERIM2.CUSTHIST_BMUNTCUST INDEX [FAST FULL SCAN]
    3 BUFFER [SORT]
    2 STGADM.DUAL2 TABLE ACCESS [FULL]
    6 BUFFER [SORT]
    5 CCBINTERIM2.INDEX_TABLE_CINDA TABLE ACCESS [FULL]
    9 CCBINTERIM2.CUSTOMER_BASE TABLE ACCESS [BY INDEX ROWID]
    8 CCBINTERIM2.CUSTBASE_UNTCUST INDEX [RANGE SCAN]
    11 CCBINTERIM2.FFCCB_UNTCUST INDEX [RANGE SCAN]

    Step # Description Est. Cost Est. Rows Returned Est. KBytes Returned
    1 This plan step retrieves all of the ROWIDs of B*-tree index CUSTHIST_BMUNTCUST by sequentially scanning the leaf nodes. 4 1 0.018
    2 This plan step retrieves all rows from table DUAL2. 2 1 --
    3 This plan step sorts the buffer row source. 2 1 --
    4 This plan step accepts two sets of rows and builds the set of all possible combinations of row pairs. The result set grows exponentially with the size of the row sets joined. 6 1 0.018
    5 This plan step retrieves all rows from table INDEX_TABLE_CINDA. 9 18,845 423.276
    6 This plan step sorts the buffer row source. 13 18,845 423.276
    7 This plan step accepts two sets of rows and builds the set of all possible combinations of row pairs. The result set grows exponentially with the size of the row sets joined. 15 1 0.04
    8 This plan step retrieves one or more ROWIDs in ascending order by scanning the B*-tree index CUSTBASE_UNTCUST. 2 1 --
    9 This plan step retrieves rows from table CUSTOMER_BASE through ROWID(s) returned by an index. 3 1 0.016
    10 This plan step joins two sets of rows by iterating over the driving, or outer, row set (the first child of the join) and, for each row, carrying out the steps of the inner row set (the second child). Corresponding pairs of rows are tested against the join condition specified in the query's WHERE clause. 18 1 0.056
    11 This plan step retrieves one or more ROWIDs in ascending order by scanning the B*-tree index FFCCB_UNTCUST. 2 1,965,430 24,951.748
    12 This plan step joins two sets of rows by iterating over the driving, or outer, row set (the first child of the join) and, for each row, carrying out the steps of the inner row set (the second child). Corresponding pairs of rows are tested against the join condition specified in the query's WHERE clause. 20 1 0.068
    13 This plan step performs a windowing analytical function on a set of data which needs to be first sorted based on some criteria. 23 1 0.068
    14 This plan step designates this statement as an INSERT statement. 23 1 0.068


    Quote Originally Posted by gbrabham
    Definitely run statspack to see where the waits are ...
    I'm also interested in seeing explain plan for :

    Gregg

  9. #9
    Join Date
    Dec 2003
    Location
    Karachi, Pakistan
    Posts
    45

    I tried APPEND

    Hi

    OK, I have executed one statement with the APPEND hint; let's see what the result is.

    About your other points, let me answer them one by one:

    1. Yes, at the time of executing my statements no other program was running.
    2. I run three statements in parallel.
    3. Both schemas are on the same server.



    Quote Originally Posted by AlanP
    Use the APPEND hint as it should help especially with your redo. If in doubt try it out and see if it makes a difference.

    Other things to do include
    1) schedule it at a quiet time.
    2) Run groups of statements in parallel.
    3) Are the two schemas you're transferring data between on the same server and instance? If not, copy the temp table over to your prod database and then try doing the insert into the production table.

    Alan

  10. #10
    Join Date
    Apr 2003
    Location
    Greenville, SC (USA)
    Posts
    1,155
    I will assume that the tables you are reading from are quite large. How big
    is your buffer cache? What are the waits from statspack? My guess is that
    you are reading into and flushing out the buffer cache quite frequently. Also,
    are you using manual or automatic segment space management? Is the table that you are inserting into in little fragments? If you are using manual, what
    are the FREELISTS and INITRANS settings? How about the INITIAL and NEXT extent sizing?
    How about the size of the tablespace itself, and is it extending in little pieces?

    The query is not too bad... Ensure that the driving table will return the smallest number of rows first (return the smallest result set first).
    The Cartesian joins might need to be looked at (DUAL is OK).

    HTH
    Gregg
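The extent and tablespace questions above can be checked from the data dictionary; a sketch (run as a DBA; the owner and tablespace names are placeholders):

```sql
-- How fragmented is the target segment?
SELECT segment_name,
       COUNT(*)             AS extents,
       SUM(bytes)/1024/1024 AS mbytes
FROM   dba_extents
WHERE  owner = 'PRODSCHEMA'           -- placeholder owner
AND    segment_name = 'CI_BILL_CHAR'
GROUP  BY segment_name;

-- Is the tablespace autoextending in small pieces?
SELECT file_name, autoextensible, increment_by
FROM   dba_data_files
WHERE  tablespace_name = 'PROD_TS';   -- placeholder tablespace
```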

  11. #11
    Join Date
    Jul 2003
    Posts
    2,296
    Remember to set the table to NOLOGGING if you want the full benefit of a direct-path insert (i.e. APPEND); otherwise the load still generates redo and you will see far less performance gain.
    - The_Duck
    you can lead someone to something but they will never learn anything ...
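The NOLOGGING plus direct-path combination, with the backup caveat raised earlier in the thread, would look like:

```sql
ALTER TABLE ci_bill_char NOLOGGING;

INSERT /*+ APPEND */ INTO ci_bill_char
SELECT ... ;          -- the SELECT shown in post #5
COMMIT;

ALTER TABLE ci_bill_char LOGGING;
-- Direct-path NOLOGGING changes cannot be recovered from archived redo:
-- take a fresh backup of the affected tablespace afterwards.
```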
