  1. #1
    Join Date
    Sep 2006

Snapshot too old

    How can a query that extracts data from a transactional table to a flat file fill up the UNDO area, when the data it extracts is historical, with almost no modifications?

    I understand that the UNDO area keeps a safe copy of a record for the reading query if that record is modified before the query ends. But if the modifications to the records being read are very few, and the UNDO area is big enough, and even so the UNDO area is running out, then I must be understanding the logic very badly!
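
    If it helps to pin this down, here is a minimal read-only sketch of what could be checked (assuming Oracle 9i or later with automatic undo management, and SELECT access to the V$ views; nothing here changes the production environment):

        -- How long Oracle tries to keep committed undo, in seconds:
        SELECT value AS undo_retention_sec
        FROM   v$parameter
        WHERE  name = 'undo_retention';

        -- Per 10-minute interval: the longest-running query (in seconds)
        -- and how many ORA-01555 "snapshot too old" errors were raised:
        SELECT begin_time,
               maxquerylen AS longest_query_sec,
               ssolderrcnt AS ora_01555_count
        FROM   v$undostat
        ORDER  BY begin_time;

    If longest_query_sec regularly exceeds undo_retention_sec, the extract is simply outrunning the undo retention, however few modifications there are.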

    Can someone explain this to me better?

    Thanks and Regards!

  2. #2
    Join Date
    Aug 2003
    Where the Surf Meets the Turf @Del Mar, CA
    Provided Answers: 1
    Visit http:/ and do a keyword search on ORA-01555.
    You can lead some folks to knowledge, but you can not make them think.
    The average person thinks he's above average!
    For most folks, they don't know, what they don't know.
    Good judgement comes from experience. Experience comes from bad judgement.

  3. #3
    Join Date
    Jan 2004
    Croatia, Europe
    Provided Answers: 4
    This doesn't have to make much sense in your case, but - as you didn't provide the actual code you use - perhaps there's a loop which does some processing and you are committing after every n records. If so, try to avoid that; frequent committing can also cause the ORA-01555 error to be raised (see the sketch below).
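
    For illustration, a minimal sketch of the pattern to avoid (the ORDERS table and its columns are made-up names):

        BEGIN
           FOR rec IN (SELECT order_id FROM orders WHERE status = 'OLD') LOOP
              UPDATE orders SET status = 'ARCHIVED'
               WHERE order_id = rec.order_id;
              COMMIT;  -- committing inside the loop, while the cursor is
                       -- still open, tells Oracle it may reuse the undo
                       -- that the cursor itself still needs for its
                       -- consistent reads -> ORA-01555
           END LOOP;
        END;
        /

        -- Safer: do the work in one statement and commit once at the end:
        -- UPDATE orders SET status = 'ARCHIVED' WHERE status = 'OLD';
        -- COMMIT;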

  4. #4
    Join Date
    Feb 2005
    Hi, if you use Oracle 8.x or 9.x, you can try


  5. #5
    Join Date
    Sep 2006


    Thanks for the replies. I'm still a little confused; I'm not an expert on Oracle, and I am not the DBA, who, in my case, doesn't want to make any modification to the production environment. The extraction is done through an ODBC DataDirect driver (Oracle Wire Protocol) and just executes a reading statement (select .... from ... where ...). The data generated will, I expect, be around 5 GB...

    Any other clues?
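
    One workaround that needs no change on the server side is to split the extract into several shorter SELECTs, so that each statement's consistent snapshot only has to survive a few minutes of undo instead of the whole run. A rough sketch (TRANSACTIONS and TRX_DATE are hypothetical names; substitute your own table and a selective, indexed column):

        -- Extract month by month instead of in one 5 GB statement:
        SELECT *
        FROM   transactions
        WHERE  trx_date >= TO_DATE('2006-01-01', 'YYYY-MM-DD')
        AND    trx_date <  TO_DATE('2006-02-01', 'YYYY-MM-DD');

        SELECT *
        FROM   transactions
        WHERE  trx_date >= TO_DATE('2006-02-01', 'YYYY-MM-DD')
        AND    trx_date <  TO_DATE('2006-03-01', 'YYYY-MM-DD');

        -- ...and so on. Each query finishes quickly, so the undo it needs
        -- for read consistency is far less likely to be overwritten.

    The trade-off is that the pieces are no longer a single point-in-time snapshot of the whole table, which is usually acceptable for historical data.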


  6. #6
    Join Date
    Jul 2003
    When you write to a flat file (or issue a large query), especially with a LARGE amount of data, Oracle is
    actually giving you a SNAPSHOT of your data as of the time of execution.

    So Oracle needs to reconstruct and hold onto that snapshot while transactions
    are processing. The larger your extract is and the longer it takes, PLUS if your
    real-time transactions are heavy, the more likely you are to run into the situation you
    are describing.

    Did that make sense to you?

    Example: if you execute a select count(*) on a very large table
    with high transactional inserts at 12:01,
    then you are going to get a count as of 12:01, but by the time the
    actual query finishes counting it might be 13:35.
    Within that timeframe maybe another million rows got inserted.
    However, Oracle is going to give you a count of rows as of 12:01, and internally Oracle needs to keep track of both the inserts and your snapshot.
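
    To make that concrete, here is the same timeline as two sessions (BIG_TABLE, the times and the counts are illustrative):

        -- Session A (the long extract), at 12:01:
        SELECT COUNT(*) FROM big_table;  -- result is consistent as of 12:01

        -- Session B (OLTP traffic), between 12:01 and 13:35:
        INSERT INTO big_table VALUES (...);  -- another million rows, committed

        -- Session A finishes at 13:35 and still reports the 12:01 count.
        -- To do that, Oracle rebuilds every changed block from undo.
        -- If the undo for any of those changes has already been
        -- overwritten, session A gets ORA-01555: snapshot too old.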
    - The_Duck
    you can lead someone to something but they will never learn anything ...
