  1. #1
    Join Date
    Jan 2004
    Posts
    9

    Unanswered: ORA-01555: snapshot too old

    Is it a good or a bad idea to do delayed commits? Something like:
    Code:
      if counter >= 1000 then
        commit;
        counter := 0;
      else
        counter := counter + 1;
      end if;
    I have read arguments both for and against this type of commit. It would be great to get your opinion on it.

    Regards,
    Sumit

  2. #2
    Join Date
    Apr 2003
    Location
    Minneapolis, MN
    Posts
    273
    For OLTP --> NO

    For DSS --> YES

    But if you are writing this code just to avoid ORA-01555, then I guess there are better ways to do it than this: explicitly assign a large rollback segment to the transaction (with SET TRANSACTION or the DBMS_TRANSACTION package), or maybe, in Oracle 9i, raise UNDO_RETENTION.
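    For example, with manual undo management you can pin the batch transaction to a large rollback segment right after each commit. The segment name RBS_BIG below is just a placeholder - use one that actually exists in your database:
    Code:
      -- plain SQL: must be the first statement of the transaction
      SET TRANSACTION USE ROLLBACK SEGMENT rbs_big;

      -- or, from PL/SQL:
      BEGIN
        DBMS_TRANSACTION.USE_ROLLBACK_SEGMENT('RBS_BIG');
        -- ... do the work here ...
        COMMIT; -- the assignment is lost at commit, re-issue it for the next transaction
      END;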


    HTH
    Bhavin

    MS Computer Science
    OCP DBA 9i/8i

  3. #3
    Join Date
    Sep 2003
    Location
    Milan, Italy
    Posts
    130
    IMHO, a transaction should be committed when you have a new consistent image of the database and want to make it permanent. It is not very different from the "save" button of a word processor, which saves your changes when you are happy with them.

    If you want to avoid ORA-1555, the best solution is simply to have bigger (or more) rollback segments, or to set UNDO_RETENTION as Hings suggests if you are using automatic undo management. That keeps your code simpler and so less buggy (more complexity -> more bugs), not to mention easier to maintain and to write.
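    For the record, with automatic undo management it is just one parameter (10800 seconds = 3 hours is only an example value):
    Code:
      -- needs UNDO_MANAGEMENT = AUTO
      ALTER SYSTEM SET UNDO_RETENTION = 10800;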

    And, in your case, committing more frequently may make ora-1555 more likely - depending on which undo segment you are using and which ones the other transactions are using.

    Obviously there are always exceptions. For example, on one occasion I needed to update a table which was concurrently modified by web users. The best thing (see the first paragraph) would have been to commit after the last modified row, but doing so means locking the rows and making the users wait - so I did the opposite and committed after EACH row. In that case, I had no choice.

    Remember that when you commit:
    1) you wait for the DISK write of your redo by LGWR;
    2) you generate more redo (for the commit marker itself);
    so it is a relatively expensive operation, since it involves a disk wait. (When you modify a row you don't wait for the block to be written to disk - that is done later by DBWR - so it's a fast memory operation; and the redo is written to the log buffer, again a memory operation.)

    HTH
    Al

  4. #4
    Join Date
    Jan 2004
    Posts
    9
    We are using an OLTP environment, but for this application the user collects data only in lots of 1000 rows at a time, so there is really no point committing after each insertion. I was getting the ORA-01555 error when I was committing after each insertion, and avoided it when I used the above code.
    I have also asked my DBA to increase UNDO_RETENTION from 3 hours to 6 hours. But my feeling is that by increasing UNDO_RETENTION we are only delaying the problem, and that we may hit this error again when the load is higher. Is that true?

  5. #5
    Join Date
    Sep 2003
    Location
    Milan, Italy
    Posts
    130
    Originally posted by sumit192
    We are using an OLTP environment, but for this application the user collects data only in lots of 1000 rows at a time, so there is really no point committing after each insertion. I was getting the ORA-01555 error when I was committing after each insertion, and avoided it when I used the above code.
    I have also asked my DBA to increase UNDO_RETENTION from 3 hours to 6 hours. But my feeling is that by increasing UNDO_RETENTION we are only delaying the problem, and that we may hit this error again when the load is higher. Is that true?
    How long does it take to collect 1000 rows? More or less than 3 hours?

    UNDO_RETENTION is just a target: Oracle may not be able to keep the undo for the time you ask for. You should enlarge the UNDO tablespace at the same time, to (try to) avoid ORA-1555.
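    A rough way to size it is to look at V$UNDOSTAT, which records undo usage in 10-minute buckets:
    Code:
      SELECT MAX(undoblks / ((end_time - begin_time) * 86400)) undo_blks_per_sec,
             MAX(maxquerylen) longest_query_sec
        FROM v$undostat;
      -- undo space needed (bytes) ~= undo_blks_per_sec * db_block_size * undo_retention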

    Are you reading from the same table you are inserting to?

    bye
    Al

  6. #6
    Join Date
    Jan 2004
    Posts
    9

    Reply

    How long it takes generally depends on the state of the database. If the load is light, collecting those 1000 rows may take 1-2 hours, but if the load increases it may take 5. These figures are only for the data I am dealing with right now, and I expect them to get worse as the size of the database grows.

    Yes, I am inserting and then reading and deleting (the rows that I just read) from the same table.

    Regards,
    Sumit

  7. #7
    Join Date
    Sep 2003
    Location
    Milan, Italy
    Posts
    130

    Re: Reply

    Originally posted by sumit192
    How long it takes generally depends on the state of the database. If the load is light, collecting those 1000 rows may take 1-2 hours, but if the load increases it may take 5. These figures are only for the data I am dealing with right now, and I expect them to get worse as the size of the database grows.
    Are you sure there is not a faster way to collect the rows? I would try to optimize this (if possible); it may solve your problem at once...


    Yes, I am inserting and then reading and deleting (the rows that I just read) from the same table.
    Ah, a "fetch across commit" perhaps! Are you doing something like this (in pseudo-code)?
    Code:
      for r in (select * from t) loop
        delete from t where pk = r.pk;
        insert into t ...;
        commit; -- or commit every 1,000 rows read or inserted
      end loop;

    ORA-1555 is the typical outcome of this... The best thing would be to commit outside the loop; that would avoid ORA-1555, but you risk filling the undo tablespace.
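    I.e. something like this (t and pk as in the pseudo-code above):
    Code:
      for r in (select * from t) loop
        delete from t where pk = r.pk;
        insert into t ...;
      end loop;
      commit; -- one commit, outside the loop: the open cursor no longer
              -- risks needing undo that a commit has allowed to be overwritten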

    More info on what you are trying to do would be useful for finding a better solution...

    HTH
    Al

  8. #8
    Join Date
    Aug 2003
    Location
    Where the Surf Meets the Turf @Del Mar, CA
    Posts
    7,776
    Provided Answers: 1
    BAD, very bad.
    It gains NOTHING.
    It is wasteful & can result in errors.
