  1. #1
    Join Date
    Jun 2003

    Unanswered: Oracle SAME Disk Configuration Experience

    I am deploying a 9i database on a Sun server with two 800 GB arrays. After struggling with disk configuration issues on past deployments, I am considering Oracle's SAME (Stripe And Mirror Everything) technique to provide high performance and eliminate a lot of administration hassle going forward.

    Has anyone deployed a SAME configuration and can share their experience?


  2. #2
    Join Date
    Mar 2004
    Corona, CA

    Oracle SAME Disk Configuration

    I have the same questions. The vendors are pushing in this direction but I have many concerns.

  3. #3
    Join Date
    Jul 2003
    If you have an array on 1 mount point then there is no point in
    making a bunch of filesystems because they all have to go through the
    same mount point in the end.

    If you have multiple mount points, then I would suggest separating
    your system tablespace from your data and your indexes and redo, etc.

    Most new systems (I believe) only have one mount point for the disk array.

    I wish I had made just one huge filesystem when I had the chance.
    I definitely made it more complicated than necessary.

    I feel a good practice (lacking multiple mount points) would be to have one
    huge filesystem (striped, etc.) for all your datafiles. Then break out
    your db data into smaller datafile/tablespace groups, such as large/medium/small index datafiles, large/medium/small table datafiles, etc.

    Oracle seems to perform well when I can separate the data into different
    datafiles/tablespaces (regardless of where they are located on the server).
    Last edited by The_Duck; 03-04-04 at 16:43.
    - The_Duck
    you can lead someone to something but they will never learn anything ...

  4. #4
    Join Date
    Mar 2002
    Reading, UK
    From an Oracle 8i performance tuning paper:


    The Stripe & Mirror Everything (SAME) Myth

    We are ABC Corporation and we are the “gods of disks.” Don't worry about separating the various files of your Oracle database, just create one huge logical volume with all the disk drives available and store all of the files there. This makes I/O management very simple and reduces hotspots in your database.


    Okay, we have heard this claim with uncanny frequency, but realize that real-life applications can behave quite differently from vendor benchmarks. It should be noted that no two implementations of the same application will be identical, due to the level of customization the application requires to suit the business's requirements. This is especially true for third-party packaged applications that comprise many thousands of tables and indexes. Practical experience suggests that only in very exceptional cases does the concept of SAME – making one logical volume with all your disks – really work. The reason it is not optimal stems from how most applications perform I/O. If your applications constantly perform operations that involve significant index scans (typical in batch jobs) on one or more large tables in your database, the method mentioned above can cause significant I/O performance problems.

    There are two core issues that require understanding:

    a) The various phases that make up an I/O request

    b) How index scans impact the I/O subsystem

    An I/O request has three phases – seek, latency, and transfer. The seek phase, which is the time it takes for the disk drive’s head to align to the correct disk location, usually accounts for 40-60% of each I/O request (based on an average of 4 ms - 6 ms for standard I/O requests whose response times are 10 ms), so every effort to reduce this cost will increase I/O performance. The effort involved here is not so much geared towards “reducing the number of seeks”, but instead towards reducing the cost of each seek operation and thus providing consistent I/O throughput. If every database component (including the DATA and INDX datafiles) is located on the same set of disks, the cost of performing seeks will be very high, causing increased queuing for the devices, followed by increased wait times for I/O requests to complete.
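    The 40-60% figure follows directly from the timings quoted above. A minimal sanity check, using those numbers as assumptions (4-6 ms average seek within a 10 ms request; none of these are measurements):

    ```python
    # Share of a single-block I/O request spent seeking, using the
    # illustrative figures from the paper above (assumed, not measured).
    SEEK_MS_LOW, SEEK_MS_HIGH = 4.0, 6.0   # average seek time range
    TOTAL_MS = 10.0                         # typical total response time

    low_share = SEEK_MS_LOW / TOTAL_MS      # fraction at the low end
    high_share = SEEK_MS_HIGH / TOTAL_MS    # fraction at the high end

    print(f"seek accounts for {low_share:.0%} to {high_share:.0%} of the request")
    ```

    Anything that shortens the average seek therefore cuts nearly half of each request's latency, which is the lever the paper is arguing about.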

    When a SQL statement uses an index to execute a query, it reads one or more blocks of the index into the database buffer cache, based on the value of the indexed column referenced in the where clause of the SQL statement. The rowid(s) that match the value that is searched (in the index) are then used to read the data from specific block(s) of the table in question.

    When significant index scans are performed (and assuming that not all blocks are cached), the index blocks and the data blocks of the table must constantly be read in single-block reads. The real problem here is the need to seek to different locations on disk to service I/O requests for index and data blocks. Balancing the number of seeks on a system (where different blocks of data from multiple tables and indexes are accessed concurrently) should be paramount in any effort that attempts to eliminate I/O bottlenecks, as this will reduce the cost of a seek operation.
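    The interleaving effect can be sketched with a toy model. The assumption here is crude – seek cost is proportional to the distance between successive block addresses, and the index and table live at opposite ends of the volume – but it shows why alternating index/data reads on one device cost far more head movement than serving each stream from its own device:

    ```python
    # Toy seek model: cost of a head movement ~ |new address - old address|.
    # All addresses are made-up illustrative values.
    def total_seek_distance(addresses):
        pos = 0
        dist = 0
        for a in addresses:
            dist += abs(a - pos)
            pos = a
        return dist

    # Index blocks clustered near the start, table blocks far away.
    index_blocks = list(range(0, 100, 10))
    data_blocks = [10_000 + 50 * i for i in range(10)]

    # One shared volume: index read, then the matching data read, interleaved.
    shared = [b for pair in zip(index_blocks, data_blocks) for b in pair]
    shared_dist = total_seek_distance(shared)

    # Separate devices: each head only services its own stream.
    separate_dist = (total_seek_distance(index_blocks)
                     + total_seek_distance(data_blocks))

    print(shared_dist, separate_dist)
    ```

    In this model the shared-volume head ping-pongs across the whole address range on every row lookup, while the separated layout keeps each head's movements short.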

    The biggest downside of the SAME methodology is that it generalizes the I/O access patterns of redo log files and datafiles to be the same, hence the recommendation to put them in the same bucket. It further assumes that disk drives are small in size and available in plenty. The second assumption is becoming part of our storage utopia: vendors are consistently introducing new disk models with double the storage capacity of their predecessors, without necessarily doubling the throughput. Some storage vendors have already started touting 144 GB disk drives as their standard, making it harder than ever to balance I/O access patterns and alleviate I/O bottlenecks.

    Among other issues, the SAME method does not cater well to systems that require varying levels of RAID for different I/O access patterns. Its support for Parallel Query and Database Partitioning is very inadequate. Lastly, the failure of one disk drive affects every component in the database, especially in a RAID 1+0 environment, as this reduces the I/O throughput capacity by 50%. Talk about a single failure affecting everything.
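    That last point can be sketched numerically. In a simple model (all numbers assumed for illustration), reads can be served by either side of a mirror, and a full-stripe scan over a stripe-everything volume moves at the pace of its slowest mirrored pair – so losing one disk throttles every large I/O on the volume:

    ```python
    # Toy RAID 1+0 read model: N mirrored pairs, each disk serving
    # `per_disk` reads/sec (assumed value, purely illustrative).
    def pair_read_capacity(disks_alive, per_disk=100):
        # Reads can be served by either mirror, so a healthy pair gives 2x.
        return disks_alive * per_disk

    pairs = 4
    healthy = [2, 2, 2, 2]    # both disks alive in every pair
    degraded = [2, 2, 2, 1]   # one disk lost in the last pair

    # With everything striped across all pairs, a full-stripe scan is
    # gated by the slowest pair, so one failure slows the whole volume.
    scan_rate_healthy = pairs * min(pair_read_capacity(d) for d in healthy)
    scan_rate_degraded = pairs * min(pair_read_capacity(d) for d in degraded)
    print(scan_rate_healthy, scan_rate_degraded)
    ```

    Under these assumptions the degraded scan rate is half the healthy one, matching the paper's 50% figure for striped-everything workloads.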

  5. #5
    Join Date
    Jun 2003


    I was looking at deploying SAME for our 1 TB data warehouse, which has lots of unpredictable ad-hoc queries rather than the much more predictable queries of a production operational or OLTP system.

    I'm not an expert on SAME, but my recollection of Oracle's white paper on SAME is that it acknowledged that for any one query, SAME may not yield the performance that a perfectly laid out configuration would provide, but that it was targeting aggregate performance. SAME would allow near-optimal performance for all I/O, particularly when you are using parallel processing - if I have a parallel query running on 8 or 16 CPUs, it seems reasonable that if I can initiate I/O across all 4 channels and 22 disks rather than trying to push/pull it all from one or two disks, I'm going to be better off. The goal is to provide a configuration that performs reasonably well all the time and does not require a lot of DBA time planning disk layouts, tablespaces, etc.
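    The aggregate-throughput argument is just arithmetic. A back-of-the-envelope sketch, taking the 22 disks and 4 channels from the post and assuming made-up per-disk and per-channel rates:

    ```python
    # Back-of-the-envelope parallel-scan throughput. The per-disk and
    # per-channel rates are assumptions for illustration only.
    per_disk_mb_s = 30     # assumed sequential rate per disk
    channel_mb_s = 200     # assumed bandwidth per channel

    disks, channels = 22, 4

    # SAME: stripe over everything, bounded by disks or channels,
    # whichever saturates first.
    wide = min(disks * per_disk_mb_s, channels * channel_mb_s)

    # Hand-placed layout where one hot segment sits on two disks.
    narrow = 2 * per_disk_mb_s

    print(wide, narrow)
    ```

    Under these assumptions the striped layout sustains an order of magnitude more scan bandwidth than pulling the hot segment from two disks, which is the aggregate-performance case the white paper was making.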

    To your point on rollback and redo - I think you are right, you do not want those guys on the big data volume. I have seen others comment that they agree with the general principle of SAME but that the database internals should reside elsewhere, some even recommending they be placed on local drives on the server if possible rather than on an external array.

    I think disk failure in a SAME environment is a legitimate question. It is probably more of an issue in an OLTP environment, where there is not a lot of excess performance capacity and a 50% reduction in I/O performance from a disk failure would be a big deal. I see it as less of an issue in a data warehouse environment, where the workload might be a little more flexible. I would happily trade off 4 - 8 hours of degraded peak performance once or twice a year in exchange for a storage configuration that provided a high level of performance overall and, more importantly, doesn't take a lot of my time to design and care for on an ongoing basis.
