
Thread: DPF Design Help

  1. #1
    Join Date
    Nov 2007
    Posts
    27

    Unanswered: DPF Design Help

    Hi,

    One of my applications currently uses a DB2 Enterprise Edition Version 9.1 database. We are now planning to enhance this database, and its size is expected to grow to 5 TB. Consequently, we are planning to move to a DPF (Database Partitioning Feature) system.

    Existing Schema in short

    The schema is mostly used for reporting purposes. We have 3-4 major tables (each with 80-90 columns), currently holding around 400-500 million rows apiece. All the reports are driven by account, which is present in all the tables. But we need to join all these tables with a PRODUCT table (on a BIGINT id), which is a master table holding all our products along with the corresponding reference data.

    Question:
    We are planning to use account as the distribution key for the 3 main tables.
    How can we distribute the PRODUCT table, which does not have account, so that the joins are collocated?
    The PRODUCT table is around 4 million rows and is inserted into and updated on a daily basis. We tried a replicated MQT, but it did not work due to the heavy inserts and updates. We also considered denormalizing the main tables to include the product attributes there, but that was not feasible due to the size and the frequent updates and inserts.
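    For reference, the layout described above might be sketched roughly like this (all table, column, and tablespace names here are hypothetical, for illustration only):

    ```sql
    -- Main fact tables hashed on the account column, so account-driven
    -- joins between them stay collocated on each partition:
    CREATE TABLE account_txn (
        account_id  BIGINT NOT NULL,
        product_id  BIGINT NOT NULL
        -- ... remaining 80-90 columns ...
    ) IN ts_data
      DISTRIBUTE BY HASH (account_id);

    -- The replicated-MQT approach that was tried for PRODUCT,
    -- which copies the table to every partition:
    CREATE TABLE product_rep AS
        (SELECT * FROM product)
        DATA INITIALLY DEFERRED REFRESH IMMEDIATE
        IN ts_data
        REPLICATED;

    REFRESH TABLE product_rep;
    ```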

    Any help is appreciated.

    Thanks,
    Rajesh

  2. #2
    Join Date
    May 2003
    Location
    USA
    Posts
    5,737
    Did you try creating the MQT as a subset of the real table, with only the columns you frequently need for reporting? Also, careful selection of the index may allow "index only" access to the table for most reports.

    I would put the MQT in a separate bufferpool from the other tables to help with the frequent insert and update activity.
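    A subset replicated MQT along the lines suggested might look like the sketch below (the column names are assumptions, not from the original schema):

    ```sql
    -- Replicated MQT carrying only the columns the reports actually need,
    -- which shrinks the copy maintained on each partition:
    CREATE TABLE product_rep AS
        (SELECT product_id, product_name, product_type
         FROM product)
        DATA INITIALLY DEFERRED REFRESH IMMEDIATE
        REPLICATED;

    REFRESH TABLE product_rep;

    -- A unique index with INCLUDE columns can give "index only"
    -- access for reports that touch just these columns:
    CREATE UNIQUE INDEX ix_product_rep
        ON product_rep (product_id)
        INCLUDE (product_name, product_type);
    ```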
    M. A. Feldman
    IBM Certified DBA on DB2 for Linux, UNIX, and Windows
    IBM Certified DBA on DB2 for z/OS and OS/390

  3. #3
    Join Date
    Dec 2007
    Location
    Richmond, VA
    Posts
    1,328
    Provided Answers: 5
    Hi Rajesh,
    What is the problem you are trying to fix? Slow response in application/report? We might be able to help you better if you gave some background info on what you are doing/what is wrong/etc... The partitioning sounds good based on the size of your 'account' tables, though you seem to be concentrating on this rather small table.

  4. #4
    Join Date
    May 2003
    Location
    USA
    Posts
    5,737
    Quote Originally Posted by dav1mo View Post
    Hi Rajesh,
    What is the problem you are trying to fix? Slow response in application/report? We might be able to help you better if you gave some background info on what you are doing/what is wrong/etc... The partitioning sounds good based on the size of your 'account' tables, though you seem to be concentrating on this rather small table.
    It appears that he is trying to eliminate cross-partition joins to the product table, which is 4 million rows. He has already tried to distribute the product table to all partitions via a replicated MQT, but is having performance problems with the level of inserts and updates to the product table.
    M. A. Feldman
    IBM Certified DBA on DB2 for Linux, UNIX, and Windows
    IBM Certified DBA on DB2 for z/OS and OS/390

  5. #5
    Join Date
    May 2003
    Location
    USA
    Posts
    5,737
    One other suggestion: if the level of inserts and updates to the product table is causing lock contention with the user queries, then run the queries with WITH UR appended at the end to eliminate any locking problems. Also, make sure that LOCKLIST is sufficiently large and MAXLOCKS is 30 or less (I would not recommend using STMM to control LOCKLIST).
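    As a sketch of the two settings mentioned (the table names and parameter values below are illustrative, not recommendations for this workload; the cfg updates are CLP commands, not SQL):

    ```sql
    -- Report query reading with uncommitted-read isolation, so it
    -- never waits on locks held by product-table updates:
    SELECT p.product_name, COUNT(*)
    FROM account_txn t
    JOIN product p ON p.product_id = t.product_id
    GROUP BY p.product_name
    WITH UR;

    -- Fixed LOCKLIST (in 4 KB pages) and MAXLOCKS, rather than
    -- letting STMM size the lock list:
    -- UPDATE DB CFG FOR mydb USING LOCKLIST 20000;
    -- UPDATE DB CFG FOR mydb USING MAXLOCKS 30;
    ```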
    M. A. Feldman
    IBM Certified DBA on DB2 for Linux, UNIX, and Windows
    IBM Certified DBA on DB2 for z/OS and OS/390
