  1. #1
    Join Date
    Feb 2004
    Posts
    45

    Unanswered: tuning redo/undo for high insert activity

    Please feel free to laugh at the clueless DBA who is not thinking clearly.

    I have to design an Oracle 9 database in which 3 tables will experience a high insert rate, estimated at 32,000 - 35,000 inserts per second, 24 x 7. In addition, a large but widely varying number (200 - 4000) of selects will be performed against these tables every hour, and these will primarily select a large subset of the rows inserted within a 1-2 second time frame.

    It's very unlikely that the rows selected will still be in the redo buffers at the time of selection, so a significant amount of database activity will occur constructing consistent reads. This makes me think that increasing the number of redo files and distributing them across a large number of disks will help improve responsiveness for constructing consistent reads, but that is the point of my posting: Can/does Oracle use separate redo files to construct multiple consistent read views, or are they all read in somewhat parallel fashion (resulting in a slowdown due to increased disk activity)?

    This sounds like something I should know (and probably would if I were thinking clearly this early on a Monday morning).

    Thanks in advance.
    It was working just 5 minutes ago - I promise !

  2. #2
    Join Date
    Aug 2003
    Location
    Where the Surf Meets the Turf @Del Mar, CA
    Posts
    7,776
    Provided Answers: 1
    HUH?
    AFAIK only 1 redo logfile is ever "active" at any point in time, until a log switch occurs.
    AFAIK it is the rollback segments which provide the read-consistent view of data.
    Please correct me if I am wrong.
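    A quick way to check the first point for yourself, using the standard v$log dictionary view (nothing here is specific to any one system):

        -- Only one group should show STATUS = 'CURRENT' at a time;
        -- the others cycle through ACTIVE/INACTIVE as log switches occur.
        SELECT group#, status FROM v$log;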
    You can lead some folks to knowledge, but you can not make them think.
    The average person thinks he's above average!
    For most folks, they don't know, what they don't know.
    Good judgement comes from experience. Experience comes from bad judgement.

  3. #3
    Join Date
    Feb 2004
    Posts
    45
    anacedent, I think you may be uncovering a little of my fog-headedness. Consistent reads, as I understand them, are accomplished using undo segments (which are also used for rollback). Redo log files (a separate thing) are created when the database is created, can be in groups (i.e. multiplexed), and are used for recovery.
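    For what it's worth, the two are configured by separate parameters, which makes the distinction easy to see on a 9i instance (a standard v$parameter query, nothing system-specific):

        -- Undo side: supplies the before-images used for consistent reads and rollback.
        -- Redo side: the log buffer feeding the online redo log files, used for recovery.
        SELECT name, value
          FROM v$parameter
         WHERE name IN ('undo_management', 'undo_tablespace',
                        'undo_retention', 'log_buffer');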

    However, I still have a lot of consistent reads to support, and would appreciate ideas on making the construction of consistent reads quick and efficient.

  4. #4
    Join Date
    Mar 2002
    Location
    Reading, UK
    Posts
    1,137
    1) Keep your redo logs on single non-striped disks (see www.ixora.com.au for more info), and make sure you don't have a small redo buffer (i.e. less than 512K). Striping the undo datafiles should help as well. Avoid RAID5 at all costs.
    2) You may find the last blocks in the table have heavy insert activity as well as select activity, so they may become hot. The best way to reduce the impact of this is to partition the data in such a way that the insert activity is spread over a number of partitions. You can also help by setting INITRANS higher; larger values for PCTFREE will also reduce the number of rows in the hot blocks (though I/O may suffer). See the sketch after this list.
    3) If you have an index on a key which is based on a sequence, you may find hot blocks at the end of the index tree. You can solve this with a reverse key index, which reverses the byte order of the indexed values (see CREATE INDEX in the SQL manual).
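    To make points 2 and 3 concrete, here is a minimal DDL sketch; the table, column, and index names and the partition count are made up for illustration, not recommendations for your schema:

        -- Hypothetical table: spread insert activity across 8 hash partitions
        -- and leave extra room in each block for concurrent transactions.
        CREATE TABLE events (
            event_id    NUMBER,
            created_at  DATE,
            payload     VARCHAR2(200)
        )
        PCTFREE  20    -- fewer rows per block, so less contention on hot blocks
        INITRANS 8     -- more interested-transaction slots per block
        PARTITION BY HASH (event_id) PARTITIONS 8;

        -- Reverse key index: sequence-generated values no longer pile up
        -- in the rightmost leaf block of the index.
        CREATE INDEX events_id_ix ON events (event_id) REVERSE;

    Hash partitioning on the sequence key spreads concurrent inserts across partitions (and hence different blocks), while REVERSE does the same for the index leaf blocks.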

    The most important thing is to have a test bed where you can simulate the load against a database (which ideally should be as close to your PROD database as possible) so you can try out all of the options (and there are a lot under Oracle).

    Alan

  5. #5
    Join Date
    Feb 2004
    Posts
    45
    Alan, thanks for all the suggestions. Do you have any advice in particular on how to improve the construction of consistent reads?
    It was working just 5 minutes ago - I promise !

  6. #6
    Join Date
    Mar 2004
    Location
    Colorado
    Posts
    49
    If you are doing 32,000 inserts per second (I am assuming your queries are summary reports) and committing, then a query only has to construct a read-consistent view of the blocks that were inserted into after the query started (newer SCN) to get the before image of those blocks. So I would first make sure you have a large amount of undo space (not redo) to ensure you do not get the "snapshot too old" error. (Remember, your undo/rollback segments hold the information for read-consistent blocks, not the redo logs.) One other thing you can try is creating a recycle pool; Oracle will push all the CR blocks to that pool. This may not help the query in question, but it can help other parts of your application by not aging out important data with CR blocks.
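    A rough sketch of both suggestions, assuming 9i automatic undo management; the retention and cache sizes are placeholders to tune for your workload, and the table name is hypothetical:

        -- Generous undo retention so long-running summary queries can still
        -- rebuild consistent-read blocks (guards against ORA-01555).
        ALTER SYSTEM SET undo_retention = 900;   -- seconds, placeholder value

        -- Carve out a recycle buffer pool (200M is a placeholder;
        -- requires an spfile and enough room under sga_max_size).
        ALTER SYSTEM SET db_recycle_cache_size = 200M SCOPE=SPFILE;

        -- Point the hot insert table at the recycle pool so its CR clones
        -- do not age important blocks out of the default pool.
        ALTER TABLE your_hot_table STORAGE (BUFFER_POOL RECYCLE);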
