  1. #1
    Join Date
    Dec 2003
    Posts
    8

    Unanswered: insert into large partitioned table takes a long time

    We are running Oracle 9.2.0.3 on Sun Solaris 2.9

    I have a huge table, range partitioned by price_date and then hash partitioned by fund_id. The table has 430 million rows. Every day a batch job inserts 1.5 to 3 million rows, followed by some updates. The inserts and updates are built with dynamic SQL. The insert takes a very long time. If I drop the indexes first, inserting 3 million rows takes about 9 minutes, but since the updates immediately follow the inserts, I can't really drop the indexes.

    I use local indexes.
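    For reference, the layout is roughly like the sketch below. The table, column, and partition names are made up for illustration; the real table is much wider and has many more partitions.

        -- Illustrative only: composite range-hash partitioned table (Oracle 9.2 syntax),
        -- range partitioned by price_date and hash subpartitioned by fund_id.
        CREATE TABLE price_history (
            price_date  DATE           NOT NULL,
            fund_id     NUMBER         NOT NULL,
            price       NUMBER(18,6)
        )
        PARTITION BY RANGE (price_date)
        SUBPARTITION BY HASH (fund_id) SUBPARTITIONS 16
        (
            PARTITION p_2003_12 VALUES LESS THAN (TO_DATE('2004-01-01','YYYY-MM-DD')),
            PARTITION p_2004_01 VALUES LESS THAN (TO_DATE('2004-02-01','YYYY-MM-DD'))
        );

        -- Local index: one index segment per subpartition, maintained row by row
        -- during the nightly insert, which is where the time goes.
        CREATE INDEX price_history_ix1 ON price_history (fund_id, price_date) LOCAL;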

    Does anyone have experience making inserts into a large table run faster?
    Does anyone have any suggestions?

    Qingbo

  2. #2
    Join Date
    Dec 2003
    Posts
    8

    we can't use the append hint

    We use Data Guard, and we need to keep all the redo, so we can't use the /*+ APPEND */ hint.
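    (For anyone reading later: the kind of minimally-logged load being ruled out here looks roughly like the sketch below, using the illustrative names from the first post; daily_feed is a made-up source table. With the segment set to NOLOGGING, a direct-path insert generates almost no redo, and that is exactly what the standby cannot afford to miss.)

        -- Illustrative only: the direct-path, minimally-logged load we cannot use.
        ALTER TABLE price_history NOLOGGING;

        INSERT /*+ APPEND */ INTO price_history
        SELECT price_date, fund_id, price
        FROM   daily_feed;
        COMMIT;

        ALTER TABLE price_history LOGGING;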

  3. #3
    Join Date
    Jul 2003
    Posts
    2,296
    There are good articles by Tom Kyte on this subject.
    Look them up on asktom.com.

    You have many options:
    use a temporary staging table with NOLOGGING (see the asktom articles above);
    drop the indexes, load, recreate the indexes, then update the data;
    create temp_table NOLOGGING as select /*+ APPEND */ * from real_table, then load data, update, etc.;
    or load the data into temp_table with NOLOGGING, run your update statement against that table, then update your REAL table, THEN insert /*+ APPEND */ into real_table select * from temp_table; (rough sketch of this last variant below)
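    Rough sketch of that last staging-table approach. temp_table, real_table, daily_feed, and the column list are placeholders, not the poster's actual schema, and the final insert keeps the /*+ APPEND */ hint only where redo requirements (e.g. Data Guard) allow it.

        -- 1. Empty staging table, no redo logging on the segment.
        CREATE TABLE temp_table NOLOGGING
        AS SELECT * FROM real_table WHERE 1 = 0;

        -- 2. Direct-path load of the day's 1.5-3 million rows into the staging table.
        INSERT /*+ APPEND */ INTO temp_table
        SELECT price_date, fund_id, price
        FROM   daily_feed;
        COMMIT;

        -- 3. Run the follow-up updates against the small staging table
        --    instead of the 430-million-row partitioned table.
        UPDATE temp_table
        SET    price = ROUND(price, 6);   -- placeholder for the real update logic
        COMMIT;

        -- 4. Apply any updates that must touch existing rows in the real table here.

        -- 5. Move the fully prepared rows into the real table in one statement.
        --    Drop the APPEND hint if direct-path loads are not acceptable.
        INSERT /*+ APPEND */ INTO real_table
        SELECT * FROM temp_table;
        COMMIT;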
    - The_Duck
    you can lead someone to something but they will never learn anything ...
