Results 1 to 10 of 10
  1. #1
    Join Date
    Sep 2013
    Posts
    2

    Unanswered: DB2 Storage design

    We have an IBM v7000 SAN. It has 24 physical disks. Each physical disk is 136GB in size. The disks are grouped into 3 RAID 5 arrays and assigned to one storage group and hence one volume. This volume is assigned to the AIX host. The AIX admin can now create single or multiple hdisks from this volume for DB2 use.

    Env: DB2 v9.7 fp7, on AIX 7.1

    The question is, should we have
    - one large hdisk (hdisk1, for example), with three filesystems created on it, one each for the DB2 data, temp, and log filesystems (underneath are the 24 physical disks), or
    - multiple hdisks (hdisk1, hdisk2, hdisk3, for example), with the DB2 data filesystem on hdisk1, logs on hdisk2, and temp on hdisk3 (underneath are the same 24 physical disks).

    What is better? Is there any advantage to either approach?

    Thank you

  2. #2
    Join Date
    May 2003
    Location
    USA
    Posts
    5,737
    Neither of those two options. I would create three separate mount points, one for each array. Don't do anything else on the AIX side. Then for DB2, create the database and assign the three mount points to automatic storage. Use automatic storage for all tablespaces.

    For DB2 transaction logs, preferably these should be on different disks, so use internal server storage for these. It would be OK to use one of the disk arrays for transaction logs (and then have only two disk arrays for automatic storage), but you don't need that much space for transaction logs and that would be a waste.
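    A minimal sketch of that setup on DB2 9.7, assuming three mount points named /db2fs1, /db2fs2, /db2fs3, a log filesystem /db2logs on internal disks, and a database named MYDB (all names hypothetical):

    ```shell
    # One JFS2 filesystem per RAID 5 array, all three assigned to
    # automatic storage when the database is created:
    db2 "CREATE DATABASE MYDB AUTOMATIC STORAGE YES \
         ON /db2fs1, /db2fs2, /db2fs3"

    # Tablespaces created with MANAGED BY AUTOMATIC STORAGE (the default)
    # then stripe their containers across the three storage paths:
    db2 "CONNECT TO MYDB"
    db2 "CREATE TABLESPACE TS_DATA MANAGED BY AUTOMATIC STORAGE"

    # Put the transaction logs on the separate internal-disk filesystem:
    db2 "UPDATE DB CFG FOR MYDB USING NEWLOGPATH /db2logs"
    ```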
    M. A. Feldman
    IBM Certified DBA on DB2 for Linux, UNIX, and Windows
    IBM Certified DBA on DB2 for z/OS and OS/390

  3. #3
    Join Date
    Sep 2013
    Posts
    2
    Thanks Marcus. However, if I don't have the option to redefine storage (I'm stuck with the one volume and the 24 disks underneath it), would it make sense to carve out multiple hdisks from the same storage volume?

  4. #4
    Join Date
    May 2003
    Location
    USA
    Posts
    5,737
    Quote Originally Posted by sundariyegna View Post
    Thanks Marcus. However, if I don't have the option to redefine storage (I'm stuck with the one volume and the 24 disks underneath it), would it make sense to carve out multiple hdisks from the same storage volume?
    There are advantages and disadvantages:

    Advantage: When you have multiple mount points defined to automatic storage, then you can sometimes get better disk parallelism with certain activities. It will not help much with most SQL for OLTP applications.

    Disadvantage: There may be too much overhead from over-striping the disks. The disks are already striped by the RAID controller, and if multiple mount points are defined to automatic storage they will be striped again by DB2. Having AIX stripe them as well is overkill.

    These are somewhat theoretical discussions, since it may be hard to actually see any differences, especially if large bufferpools are utilized.
    Last edited by Marcus_A; 09-13-13 at 16:58.
    M. A. Feldman
    IBM Certified DBA on DB2 for Linux, UNIX, and Windows
    IBM Certified DBA on DB2 for z/OS and OS/390

  5. #5
    Join Date
    Jun 2003
    Location
    Toronto, Canada
    Posts
    5,516
    Provided Answers: 1
    Another advantage of separate file systems is that they let you isolate the temporary tablespaces from the regular ones and from the logs, which might save you from an outage caused by a runaway transaction.
    ---
    "It does not work" is not a valid problem statement.

  6. #6
    Join Date
    May 2003
    Location
    USA
    Posts
    5,737
    Quote Originally Posted by n_i View Post
    Another advantage of separate file systems is that they let you isolate the temporary tablespaces from the regular ones and from the logs, which might save you from an outage caused by a runaway transaction.
    Assuming one is using Automatic Storage (which everyone should be using from now on for various reasons), how can that be accomplished? In V10.1, stogroups can be used with automatic storage to accomplish that (by creating a separate stogroup assigned to its own storage and used only for system temporary tablespaces), but the OP seems to be using 9.7.
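    For reference, the V10.1 approach would look something like this (stogroup, tablespace, and path names are hypothetical, and as noted it does not apply to the OP's 9.7 system):

    ```shell
    # DB2 V10.1+: a dedicated storage group on its own filesystem,
    # used only by the system temporary tablespace:
    db2 "CREATE STOGROUP SG_TEMP ON '/db2temp'"
    db2 "CREATE SYSTEM TEMPORARY TABLESPACE TEMPTS \
         MANAGED BY AUTOMATIC STORAGE USING STOGROUP SG_TEMP"
    ```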
    M. A. Feldman
    IBM Certified DBA on DB2 for Linux, UNIX, and Windows
    IBM Certified DBA on DB2 for z/OS and OS/390

  7. #7
    Join Date
    Jul 2013
    Location
    Moscow, Russia
    Posts
    666
    Provided Answers: 55
    It can be accomplished using a softlink.
    And I think it's not a big problem to avoid automatic storage for the temporary tablespaces as well - 9.7 is not "pureScale-capable" anyway.
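    A sketch of the softlink approach (all paths hypothetical): with the database stopped, relocate the temp container directory to its own filesystem and leave a symlink at the original path, so DB2 still resolves the container but the I/O lands on separate storage:

    ```shell
    # Stop the instance so the container is not in use:
    db2stop

    # Move the temp tablespace container directory to its own
    # filesystem and replace it with a symbolic link:
    mv /db2fs1/tempts /tempfs/tempts
    ln -s /tempfs/tempts /db2fs1/tempts

    db2start
    ```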
    Regards,
    Mark.

  8. #8
    Join Date
    Jun 2003
    Location
    Toronto, Canada
    Posts
    5,516
    Provided Answers: 1
    Quote Originally Posted by Marcus_A View Post
    Assuming one is using Automatic Storage (which everyone should be using from now on for various reasons), how can that be accomplished?
    That was the plan of the OP:
    create 3 filesystems on this, one each for db2 data/temp/log filesystems
    so ask him.
    ---
    "It does not work" is not a valid problem statement.

  9. #9
    Join Date
    May 2003
    Location
    USA
    Posts
    5,737
    Quote Originally Posted by n_i View Post
    That was the plan of the OP:

    so ask him.
    I don't think OP was planning on using automatic storage, which would be a mistake IMO.
    M. A. Feldman
    IBM Certified DBA on DB2 for Linux, UNIX, and Windows
    IBM Certified DBA on DB2 for z/OS and OS/390

  10. #10
    Join Date
    Sep 2007
    Location
    Moscow
    Posts
    41
    Quote Originally Posted by sundariyegna View Post
    However, if i don't have a choice to redefine storage (stuck with the one volume and 24 disks underneath it), then would it make sense to carve out multiple hdisks from the same storage volume?
    Not sure about a V7000 striped storage pool, but on a DS3524 I ran some tests (simulating full scans in intra_parallel mode) of I/O performance on a RAID 5 (8+1) array, and got twice the throughput with 4 LUNs/filesystems/storage paths compared to 1 LUN/filesystem/storage path. To get this, though, the 4 LUNs had to be spread equally over the two DS3524 controllers, and the related AIX hdisk parameters changed to "reserve_policy=no_reserve" and "algorithm=round_robin" to utilize both FC paths.

    If your database has a DWH/BI-like workload, or a mixed workload with large heavy queries, I think you can try implementing 4 VDISKs/filesystems/storage paths on your single V7000 storage pool and compare your database performance against a single VDISK/filesystem/storage path.

    For pure OLTP I do not think this 4-LUN config makes a big difference.

    Moreover, as previously mentioned, placing the LOGPATH and temp space on the same storage pool/RAID/MDISK set may not be such a good idea, but... "it depends"
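    For reference, the AIX attribute changes mentioned above can be applied like this (hdisk names are examples; the devices must not be in use when chdev runs):

    ```shell
    # Set no_reserve and round_robin on each of the 4 hdisks so MPIO
    # spreads I/O across both FC paths:
    for d in hdisk2 hdisk3 hdisk4 hdisk5; do
        chdev -l $d -a reserve_policy=no_reserve -a algorithm=round_robin
    done

    # Verify the attributes took effect:
    lsattr -El hdisk2 -a reserve_policy -a algorithm
    ```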
