  1. #1
    Join Date: Aug 2003

    Unanswered: Performance Optimization


    We're building a new database and we're concerned about the performance of one particular table.

    It's estimated that the table will start out with 350,000,000 rows, with an average yearly growth of 5%.

    We’re planning on creating something like 642 table partitions (the partitions will be defined according to the way we will retrieve data from the table).
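
    Just to make the idea concrete, we have something along these lines in mind (the syntax below is Oracle-style range partitioning and the table and column names are only placeholders; the real partition key follows the retrieval pattern described above):

        -- Placeholder names; the real key is whatever column we retrieve by.
        CREATE TABLE big_table (
            record_id    NUMBER        NOT NULL,
            lookup_key   NUMBER        NOT NULL,  -- the column we retrieve by
            record_date  DATE          NOT NULL,
            payload      VARCHAR2(200)
        )
        PARTITION BY RANGE (lookup_key) (
            PARTITION p_001 VALUES LESS THAN (1000),
            PARTITION p_002 VALUES LESS THAN (2000),
            -- ... one partition per retrieval range, up to the full set ...
            PARTITION p_max VALUES LESS THAN (MAXVALUE)
        );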

    We have two options that we’re considering.

    1. Create a variable-length row size

    2. Create a fixed-length row size
    - We will lose storage space, but that is not a problem
    - It will make our development more complex, but we will trade that for better performance

    Which option should we choose?

  2. #2
    Join Date: Mar 2002
    Location: Reading, UK
    What do you mean by fixed- and variable-length row size?

    The other thing is to think very carefully about your partitioning options: do you really want to maintain 642 partitions? Don't forget that partitions don't all have to be the same size; for time-related data we use a combination of yearly partitions for old data and monthly partitions for newer data. Also consider archiving strategies: you could use a larger but slower storage medium to archive off older data while keeping it available. It also helps to automate the procedures for creating new partitions and archiving off older ones.
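
    For example (Oracle-style syntax, all names purely illustrative), a mix of yearly and monthly range partitions might look like this:

        CREATE TABLE order_history (
            order_id    NUMBER       NOT NULL,
            order_date  DATE         NOT NULL,
            amount      NUMBER(12,2)
        )
        PARTITION BY RANGE (order_date) (
            -- yearly partitions for older, rarely touched data
            PARTITION p_2001    VALUES LESS THAN (TO_DATE('01-01-2002','DD-MM-YYYY')),
            PARTITION p_2002    VALUES LESS THAN (TO_DATE('01-01-2003','DD-MM-YYYY')),
            -- monthly partitions for newer, frequently queried data
            PARTITION p_2003_01 VALUES LESS THAN (TO_DATE('01-02-2003','DD-MM-YYYY')),
            PARTITION p_2003_02 VALUES LESS THAN (TO_DATE('01-03-2003','DD-MM-YYYY'))
        );

        -- Rolling maintenance: add the next monthly partition...
        ALTER TABLE order_history ADD PARTITION p_2003_03
            VALUES LESS THAN (TO_DATE('01-04-2003','DD-MM-YYYY'));

        -- ...and archive off an old year by swapping it with a standalone
        -- table (which must already exist with the same column layout and
        -- can sit on cheaper, slower storage).
        ALTER TABLE order_history EXCHANGE PARTITION p_2001
            WITH TABLE order_history_2001_arch;

    If you wrap statements like these in a scheduled job, the rolling window of partitions pretty much looks after itself.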

