  1. #1
    Join Date
    Dec 2003

    Unanswered: Table size question

    I've got a table with 180 million rows and 19 columns.

    There may be a need to add an additional 250 columns (all of them decimal(7,2)).

    How prohibitive would this be from a performance standpoint?

    Many Thanks,


  2. #2
    Join Date
    Oct 2003
    "Add 250 columns?" I don't think so! Database tables should never be "wide." Definitely, certainly, absolutely not that wide!

    180 million rows is definitely "large enough to be interesting," but by no means will it come close to exhausting the capacity of the system. Your main consideration is that you'll have to plan to do everything in small pieces. For instance, load 10,000 rows at a time, commit after each batch, and repeat the process 18,000 times. Sort the rows into ascending key order before loading them. Create indexes only after all rows are loaded. Drop indexes and constraints during massive operations, then re-add them. And so on.
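    A minimal sketch of that batching pattern in T-SQL, assuming a staging table dbo.stage_rows with the same integer key (row_id) as the target dbo.big_table (all of these names are hypothetical):

        -- Load 10,000 rows per transaction so each batch commits on its own
        -- and the log can be cleared between batches.
        DECLARE @lo INT, @hi INT, @step INT
        SELECT @lo = MIN(row_id), @hi = MAX(row_id) FROM dbo.stage_rows
        SET @step = 10000

        WHILE @lo <= @hi
        BEGIN
            BEGIN TRANSACTION

            INSERT INTO dbo.big_table (row_id, amount)
            SELECT row_id, amount
            FROM dbo.stage_rows
            WHERE row_id >= @lo AND row_id < @lo + @step
            ORDER BY row_id               -- load in ascending key order

            COMMIT TRANSACTION            -- commit after each batch
            SET @lo = @lo + @step
        END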

    The disk-I/O subsystem for this computer should be very fast, and the machine should have lots of memory. A machine that is responsible for controlling a large database should be essentially dedicated to that task.

  3. #3
    Join Date
    Dec 2003
    I agree with sundialsvcs. 250 columns sounds a bit wild, especially considering they are all the same data type. It sounds to me like you need to normalize this data by using a child table. What about making a child table with the following columns:

    PK Columns (the columns to carry the primary key from the parent table)
    col_type Char(3) (the column to describe the type for this row)
    col_value Decimal(7,2) (the column to hold the value)

    Now you will turn 250 columns into, say, 5 or so, depending on how many columns make up the parent's primary key.
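    To make it concrete, here is a sketch of that child table in T-SQL, assuming the parent's primary key is a single INT column (every name here is hypothetical):

        -- One row per (parent row, value type) pair instead of 250 columns.
        CREATE TABLE dbo.parent_values (
            parent_id INT          NOT NULL,  -- carries the parent's primary key
            col_type  CHAR(3)      NOT NULL,  -- identifies which value this row holds
            col_value DECIMAL(7,2) NOT NULL,  -- the value itself
            CONSTRAINT pk_parent_values PRIMARY KEY (parent_id, col_type),
            CONSTRAINT fk_parent_values FOREIGN KEY (parent_id)
                REFERENCES dbo.parent_table (parent_id)
        )

    A side benefit: if many of those 250 values will be empty, you only store the ones that actually exist, instead of 250 mostly-NULL columns on every one of the 180 million rows.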

    I don't know your exact situation, but it's a thought...


  4. #4
    Join Date
    Jan 2003
    Provided Answers: 11
    TECHNICALLY: This can be done. SQL Server's limitations are as follows:

    1024 columns per table (you would have 269 under this plan)
    8060 bytes per row (your 250 new columns add 1,250 bytes, since a decimal(7,2) is stored in 5 bytes)

    So, assuming your original 19 columns take up less than 6810 bytes in total, you CAN do it.
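    If you want to check that, here is a quick sketch against the SQL Server 2000-era system tables ('YourTable' is a placeholder):

        -- Sum of the maximum physical storage size of each column. For
        -- variable-length columns this is an upper bound, and per-row
        -- overhead (null bitmap, etc.) is not included.
        SELECT SUM(length) AS defined_row_bytes
        FROM syscolumns
        WHERE id = OBJECT_ID('YourTable')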

    As far as performance goes, get ready for a good long wait. This operation will cause literally millions of page splits. If you have a clustered index on the table already, that will very likely compound the page split problem. Since you will be adding tens of millions of pages, you will want to make sure both your log and data files can take the added load. If my numbers are right (180 million rows * 1,250 bytes each, or roughly 28 million 8KB pages), this will be about 225GB of space for the data alone. For insurance, see if you can get double that. Especially since I am not taking into consideration the indexes on the original 19 columns.

    Now that you know it can be done, the big question is whether you should. As for myself, I would not.
