Results 1 to 14 of 14
  1. #1
    Join Date
    Dec 2008
    Posts
    9

    Unanswered: SQL 2000 Database growth enormous

    I have a database that is growing almost exponentially in size. It is on SQL Server 2000, and the problem is limited to this particular database. We have thousands of databases and have never seen this behavior. Autogrowth is set to 10%. Here are some details:

    I have an app that takes several 5-6 MB text files and uses INSERT statements to load the data into the database. This app has been used for years without issue on other databases (100% verified it's not the app). By about the 7th file, my database went from 4 GB to 65 GB! I confirmed it's not the app because normal day-to-day use increases the database size enormously as well - it was up to 10 GB after 3 months of use, where another database (different site, but same database and app) is usually under 1 GB.

    Here is what I did to try and fix it:
    - Run a script that reports each table's name, rows, reserved KB, data KB, index size KB, and unused KB.
    - Find the tables with the most unused KB. If it's a table without view dependencies, foreign key constraints, etc., I do a SELECT INTO another table.
    - Then I DROP the bad table and recreate it, and do an INSERT INTO back into the new table. Everything is fine then - unused KB is minimal for that table.
    - Then I shrink the database, which brings it from 65 GB to 45 GB. The next shrink brings it from 45 GB to 35 GB, the next from 35 to 28. Each shrink cuts the size, but the cut gets smaller and smaller.
    - When I run the query to see the unused KB, all of the tables' unused KB gets smaller. However, some tables stay quite high (5-10 GB).
    - I have tried running DBCC DBREINDEX or DBCC INDEXDEFRAG on all tables, and am able to shrink the file a little more, but only in small increments at a time.
    - If I don't perform a SELECT INTO and don't run any index rebuilding or defragging, the database only shrinks to a fixed amount - from 65 to 58 GB, and that reduction is solely the free space from the 10% growth setting.
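
    The steps above can be sketched roughly as the following T-SQL (SQL 2000 syntax). Table and database names here are illustrative, not the actual schema:

    ```sql
    -- Per-table space report (reserved/data/index/unused), one quick way:
    EXEC sp_MSforeachtable 'EXEC sp_spaceused ''?''';

    -- Rebuild a bloated table: copy out, drop, recreate from scripted DDL, reload.
    SELECT * INTO dbo.Table_A_copy FROM dbo.Table_A;
    -- DROP TABLE dbo.Table_A;                      -- after scripting its definition
    -- ...recreate dbo.Table_A from the scripted DDL...
    -- INSERT INTO dbo.Table_A SELECT * FROM dbo.Table_A_copy;

    -- Then shrink, leaving 10% free space:
    DBCC SHRINKDATABASE (MyDatabase, 10);
    ```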

    I also performed the following test:
    - Make a copy of the production database and restore it into a test database. Shrink the database as best I can with the methods above. Run the text-importing application, and 265 MB of text files takes a 3 GB DB to ~3.8 GB. That is what is supposed to happen, but it is not happening in our production environment. I even got all users off the system, backed up production, deleted the database, created a new one, and restored the backup. Yet the same problem keeps coming up.

    I'm out of ideas on what to do! Thanks for any advice.

  2. #2
    Join Date
    Jan 2003
    Location
    Massachusetts
    Posts
    5,799
    Provided Answers: 11
    What are the sizes of the LOG file and the DATA file?

  3. #3
    Join Date
    Dec 2008
    Posts
    9
    The log file is 1024 KB. The data file is always just about the sizes I mentioned above - 65 GB when the whole database is 65 GB, 45 when 45, etc.
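
    For reference, the data/log split can be checked directly on SQL 2000; the sizes in `sysfiles` are in 8 KB pages (database and file names here are whatever yours actually are):

    ```sql
    -- Quick file inventory for the current database:
    EXEC sp_helpfile;

    -- Or compute sizes in MB yourself:
    SELECT name, size * 8 / 1024 AS size_mb
    FROM sysfiles;
    ```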

  4. #4
    Join Date
    Nov 2002
    Location
    Jersey
    Posts
    10,322
    I gotta run, but I'm all for enormous growth....wait, that could be taken wrong

    you need to look at your batch processes....if it's not high volume like tracking web hits

    Shrinking files is not always a good thing

    Think cold ocean water
    Brett
    8-)

    It's a Great Day for America everybody!

    dbforums Yak CorralRadio 'Rita
    dbForums Member List
    I'm Good Once as I ever was

    The physical order of data in a database has no meaning.

  5. #5
    Join Date
    Jan 2003
    Location
    Massachusetts
    Posts
    5,799
    Provided Answers: 11
    After every file you put in, check the sizes of the various tables to see where the extra space is being allocated to. I suspect a data problem in one of these files.
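
    One hedged way to do that check between file loads, assuming the suspect table is something like `dbo.Table_A` (name illustrative): run `sp_spaceused` against it and force the counters to be refreshed so the numbers are trustworthy.

    ```sql
    -- Refresh sysindexes counters for this table, then report its space:
    EXEC sp_spaceused 'dbo.Table_A', @updateusage = 'TRUE';
    ```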

  6. #6
    Join Date
    Dec 2008
    Posts
    9
    When a few hundred inserts are done to specific tables, the data size grows normally but the unused size grows exponentially. I wouldn't have an issue with this if I could import all of the text files and then compact the database, but I don't have 5 TB free on the server.

  7. #7
    Join Date
    Dec 2008
    Posts
    9
    MCrowley,

    How can I see where the space is being allocated? I am using a script to simply view reserved, unused, data, etc., but I don't know "where" the allocations are going physically - is there a way I could check? Thanks.

    Here's an example of my script output:
    Table_Name   rows     reserved_KB   data_KB     index_size_KB   unused_KB
    ----------   -------  -----------   ---------   -------------   ----------
    Table_A      779285   16824008      1873752     390568          14559688
    Table_B      104299   2096008       236232      28832           1830944
    Table_C      75188    1719048       142928      42176           1533944
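
    For anyone curious, a simplified sketch of the kind of script that produces output like the above on SQL 2000, derived from the `sysindexes` counters (run DBCC UPDATEUSAGE first so the counters are accurate; this is an approximation of what `sp_spaceused` computes, not the poster's exact script):

    ```sql
    -- Space per user table, in KB (reserved/used are in 8 KB pages):
    SELECT o.name AS Table_Name,
           SUM(CASE WHEN i.indid < 2 THEN i.rows ELSE 0 END) AS rows,
           SUM(i.reserved) * 8                               AS reserved_KB,
           SUM(i.used) * 8                                   AS used_KB,
           (SUM(i.reserved) - SUM(i.used)) * 8               AS unused_KB
    FROM sysobjects o
    JOIN sysindexes i ON i.id = o.id
    WHERE o.xtype = 'U'
      AND i.indid IN (0, 1, 255)   -- heap/clustered data plus text/image pages
    GROUP BY o.name
    ORDER BY unused_KB DESC;
    ```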


    I also thought there could be a data problem with the files, but I imported all of them into a copy of this database and it was fine. I know that doesn't make sense either.

    Also, here is SHOWCONTIG on the worst table:
    TABLE level scan performed.
    - Pages Scanned................................: 233513
    - Extents Scanned..............................: 231326
    - Extent Switches..............................: 231325
    - Avg. Pages per Extent........................: 1.0
    - Scan Density [Best Count:Actual Count].......: 12.62% [29190:231326]
    - Extent Scan Fragmentation ...................: 9.80%
    - Avg. Bytes Free per Page.....................: 7775.4
    - Avg. Page Density (full).....................: 3.94%
    DBCC execution completed. If DBCC printed error messages, contact your system administrator.
    Last edited by v8ramair; 12-11-08 at 16:00.
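
    (The scan above is the table-level form of the SQL 2000 fragmentation report; the command shape, with an illustrative table name, is:)

    ```sql
    -- Table-level fragmentation scan on SQL 2000:
    DBCC SHOWCONTIG ('dbo.Table_A');
    ```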

  8. #8
    Join Date
    Jan 2003
    Location
    Massachusetts
    Posts
    5,799
    Provided Answers: 11
    Use your script to see what the largest table is. Watch the growth of that table, or watch to see if that top table gets unseated from its spot as the largest table.

  9. #9
    Join Date
    Dec 2008
    Posts
    9
    Oh I see what you're saying...

    Yes, the last time I ran the import application, I ran that script constantly. As the row count went up, so did the unused space for that table. That table was always at the top, although one reason could simply be that the app inserts into it a lot. But the unused space was growing extremely fast as rows were inserted. Some other tables seem to have this issue too, just not as badly as this table.

  10. #10
    Join Date
    Nov 2002
    Location
    Jersey
    Posts
    10,322
    how are you "importing" the data?
    Brett
    8-)


  11. #11
    Join Date
    Jan 2004
    Location
    In a large office with bad lighting
    Posts
    1,040
    Looking at the SHOWCONTIG results, I see what appears to be approximately 1 row per data page (page density 3.94%, low scan density, and 7775 bytes free on average out of an ~8,000-byte data page) ... not very good. I don't see the schema of the table. So ...

    1. What is your database default fill factor? (see BOL: fill factor & fill factor option)
    2. What is the fill factor for the table shown in the scan?
    3. Are there indexes on this table? If so, what is the pad value on them? (see BOL: create index)
    4. How about a look at the table schema?

    -- This is all just a Figment of my Imagination --

  12. #12
    Join Date
    Dec 2008
    Posts
    9
    I think I found the issue. I changed the primary key to a clustered index, and now the unused space on the table is ~100 MB rather than the ~1 TB it would have reached if I hadn't changed it.

    I ran my import app, and each time it did a few hundred inserts into the table, the unused space would skyrocket. I then stopped the import, changed the primary key to clustered, and watched the growth slow to nothing. 2,300,000 total inserts produced only 100 MB of unused space. Before the change, I was at 500,000 inserts and 50 GB of unused space. I still don't get it, though - none of the other sites have had this issue, none use clustered indexes, and they've had imports 100x this size. Very strange.

    BTW, the fill factor was 80 for the table indexes. If I still have more issues, I can post the table structure. Thanks.
    Last edited by v8ramair; 12-15-08 at 12:14.
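
    The change described above - turning a nonclustered primary key on a heap into a clustered one - would look roughly like this in T-SQL (constraint, table, and column names are illustrative):

    ```sql
    -- Drop the existing nonclustered PK constraint, then re-add it clustered,
    -- so the table's data pages are ordered by the key instead of living in a heap:
    ALTER TABLE dbo.Table_A DROP CONSTRAINT PK_Table_A;

    ALTER TABLE dbo.Table_A
        ADD CONSTRAINT PK_Table_A PRIMARY KEY CLUSTERED (id);
    ```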

  13. #13
    Join Date
    Aug 2008
    Location
    India
    Posts
    55
    Hi,
    First, change the database autogrowth option to a fixed increment between 200 and 500 MB; setting it to 10% is not advisable. For example, if your database is around 40 GB, then once the free space fills up it will grow by 10% of 40 GB, i.e. 4 GB, making the size 44 GB; the next growth will be 10% of 44 GB, and so on, so it's better to change the growth setting. Second, if it's not a very critical server, check all the DTS packages and jobs that use the database: run Profiler for the required duration when you see rapid growth, analyze the trace, and draw your conclusions. (Please ignore this if you have already tried it.)
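
    The fixed-increment growth setting suggested above would be applied like this (database and logical file names are illustrative):

    ```sql
    -- Grow the data file by a fixed 200 MB instead of 10% of its current size:
    ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = MyDatabase_Data, FILEGROWTH = 200MB);
    ```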

  14. #14
    Join Date
    Jun 2004
    Location
    Long Island
    Posts
    696
    Rebuild/ReOrg your indexes regularly.
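
    On SQL 2000 that means one of the following two commands, typically from a scheduled job (table name and fill factor here are illustrative; DBREINDEX takes locks, while INDEXDEFRAG works online):

    ```sql
    -- Full offline rebuild of all indexes on the table, fill factor 80:
    DBCC DBREINDEX ('dbo.Table_A', '', 80);

    -- Or an online defrag of one index (index id 1 = the clustered index):
    DBCC INDEXDEFRAG (MyDatabase, 'dbo.Table_A', 1);
    ```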
