  1. #1
    Join Date
    Jan 2004
    Posts
    99

    Unanswered: problem with deleting

    The following delete statement blows out the rollback tablespace. I understand that by using a NOT IN the optimizer is doing a full table scan.

    How can I write this query in an optimized form? The table currently has 2 million rows and will grow to 10 million.

    Any suggestions/ideas welcome?

    Thanks.



    delete tracking.AT_Search where homeid NOT IN (#idlist#);


    ***** Explain Plan ************

    Operation                     Object Name   Rows   Bytes   Cost

    DELETE STATEMENT Hint=CHOOSE                2 M             540
      DELETE                      AT_SEARCH
        TABLE ACCESS FULL         AT_SEARCH     2 M    32 M     540

  2. #2
    Join Date
    Sep 2002
    Location
    UK
    Posts
    5,171
    Provided Answers: 1

    Re: problem with deleting

    The amount of rollback used has nothing to do with the optimizer plan (full scan v. index); it is a function of the number of rows you are deleting, which is presumably large in this case.

  3. #3
    Join Date
    Jan 2004
    Posts
    99
    Yes, you're correct, the rollback is bombing out because of the large amount of data. How can I optimize the query to prevent this from occurring?

  4. #4
    Join Date
    Jul 2003
    Location
    England
    Posts
    152
    You could try a different tack: copy the rows you want to keep out to a temp table, truncate the original table, and then either rename the temp table or copy the rows back in again.

    Different way to achieve the same thing ... (and a lot less logging!)
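
    Roughly like this (a sketch only; at_search_keep is just an illustrative name, and any indexes, constraints and grants on AT_Search would need to be checked after the truncate):

    Code:
    -- copy out the rows to keep (homeids in the list)
    create table at_search_keep as
      select * from tracking.AT_Search
       where homeid IN (#idlist#);

    -- truncate generates minimal rollback
    truncate table tracking.AT_Search;

    -- copy the kept rows back (or rename at_search_keep instead)
    insert into tracking.AT_Search
      select * from at_search_keep;

    commit;

    drop table at_search_keep;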
    Regards
    Dbabren

  5. #5
    Join Date
    Jan 2004
    Posts
    99
    Hmm, I'm not sure about creating a temp table for this particular query. Is there any other way to optimize it?

  6. #6
    Join Date
    Sep 2002
    Location
    UK
    Posts
    5,171
    Provided Answers: 1
    If you intend to delete X rows from the table, then no amount of "query optimization" will help. Dbabren's suggestion is good, but if you don't want to go with that, the other alternatives you have are:

    1) Increase available rollback so that the delete doesn't fail

    2) Break the job into smaller pieces, for example:

    PHP Code:
    begin
      for r in (select homeid from <home table> where homeid NOT IN (#idlist#))
      loop
        delete from tracking.AT_Search where homeid = r.homeid;
        commit;
      end loop;
    end;
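
    Or, as a rough sketch of the same "smaller pieces" idea done in chunks rather than row by row (the 10,000 chunk size is arbitrary):

    Code:
    begin
      loop
        delete from tracking.AT_Search
         where homeid NOT IN (#idlist#)
           and rownum <= 10000;      -- delete at most one chunk per pass
        exit when SQL%ROWCOUNT = 0;  -- nothing left to delete
        commit;                      -- release undo after each chunk
      end loop;
      commit;
    end;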

  7. #7
    Join Date
    Jan 2004
    Posts
    99
    Thanks for all your help! I will play around with the suggestions and see what comes up!

  8. #8
    Join Date
    Apr 2003
    Location
    Greenville, SC (USA)
    Posts
    1,155
    As stated, try to break the job up into smaller pieces, or:

    Create a LARGE rollback segment (i.e., 200m with a next extent of 25m)
    ... Ensure your rollback tablespace is set to handle this with autoextend

    Use this rollback segment prior to your delete statement:

    sql> set transaction use rollback segment rbs_large;
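
    Creating that segment would look roughly like this (the tablespace name is only an example):

    Code:
    create rollback segment rbs_large
      tablespace rbs_ts
      storage (initial 200m next 25m);

    alter rollback segment rbs_large online;

    Remember that SET TRANSACTION must be the first statement of the transaction, so issue it immediately before the delete with nothing committed in between.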

    HTH
    Gregg
