  1. #1
    Join Date
    Jan 2011
    Posts
    1

    Unanswered: How to start primary key analysis of an existing data model

    Hi,
    I am working on an insurance-related project that uses a DB2 mainframe database. The system is five years old. We store policy and coverage information in separate tables, about 30 in all, and the coverage table holds the most data because each transaction on a policy involves six coverages on average. The coverage data has grown over time and now contains 24 million records. Because of this volume, system performance has degraded, so we plan to make the system more scalable, starting with a primary key analysis of the whole data model. How should I begin that analysis? Please post any suggestions for improving performance from the database perspective. We are also moving to DB2 V9 soon; will that help in any way?
    Thank you.
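
    One concrete way to start the analysis asked about here is to inventory which tables already have a primary key and how large each one is. The sketch below assumes DB2 for z/OS, where a primary-key index is recorded in the catalog with UNIQUERULE = 'P' in SYSIBM.SYSINDEXES; the database name POLICYDB is an illustrative placeholder, not taken from this thread.

        -- List base tables (TYPE = 'T') that have no primary-key index,
        -- largest first. A PK index has UNIQUERULE = 'P' in SYSINDEXES.
        -- 'POLICYDB' is a hypothetical database name; substitute your own.
        SELECT T.CREATOR, T.NAME, T.CARDF AS EST_ROWS
        FROM SYSIBM.SYSTABLES T
        WHERE T.TYPE = 'T'
          AND T.DBNAME = 'POLICYDB'
          AND NOT EXISTS (SELECT 1
                          FROM SYSIBM.SYSINDEXES I
                          WHERE I.TBCREATOR = T.CREATOR
                            AND I.TBNAME   = T.NAME
                            AND I.UNIQUERULE = 'P')
        ORDER BY T.CARDF DESC;

    Tables surfacing in this list, and tables whose primary key does not match the columns queries actually search on, are the natural place to begin.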

  2. #2
    Join Date
    Jul 2009
    Posts
    10
    If one "huge" table of 24 million rows is causing you performance problems, it is probably too late to start analyzing primary keys. A better idea may be to find someone who knows how to tune DB2


    http://www.db2topgun.com

  3. #3
    Join Date
    Dec 2008
    Location
    Toronto, Canada
    Posts
    399
    Make this table partitioned; a sketch follows below.
    DB2 9.5/9.7 on Unix/AIX 6.1/Linux
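
    Table-controlled range partitioning is available in DB2 9 for z/OS, which the original poster is moving to. As a hedged sketch only (the table and column names are hypothetical, not from this thread), partitioning the coverage history by transaction date keeps each year's rows in a separate partition that queries on recent data can skip:

        -- Illustrative DDL: range-partition coverage history by year so
        -- queries against recent policies scan only the newest partitions.
        -- COVERAGE_HIST and its columns are invented for this example.
        CREATE TABLE COVERAGE_HIST
          (POLICY_NO    CHAR(10)  NOT NULL,
           COVERAGE_NO  SMALLINT  NOT NULL,
           TRANS_DATE   DATE      NOT NULL,
           COV_AMOUNT   DECIMAL(11,2))
          PARTITION BY RANGE (TRANS_DATE)
            (PARTITION 1 ENDING AT ('2008-12-31'),
             PARTITION 2 ENDING AT ('2009-12-31'),
             PARTITION 3 ENDING AT ('2010-12-31'),
             PARTITION 4 ENDING AT (MAXVALUE));

    Besides limiting scans, this makes purging or archiving an old year a partition-level operation rather than a mass DELETE against 24 million rows.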
