  1. #1

    Table Growing Quickly in Size

    I have a table that is growing regularly by a huge amount even though I am deleting all data from the table daily and inserting more or less the same amount of data again.

    The table structure is as follows:
    CREATE TABLE [dbo].[ProblemTable] (
    [col_id] [numeric](18, 0) IDENTITY (1, 1) NOT NULL ,
    [A] [char] (8) NOT NULL ,
    [B] [char] (9) NULL ,
    [C] [numeric](18, 6) NULL ,
    [D] [numeric](18, 6) NULL ,
    [E] [numeric](18, 0))

    I am only adding 704,000 rows, and normally the size of this table is about 68,000 KB. However, once a week it now suddenly grows to 6.5 GB, with about 5.6 GB of that reported as data.

    Has anyone got any idea what could be causing this? There are a couple of hundred tables in this DB, and this is the only one on which there appears to be a problem.

    Thanks
    David

  2. #2

    Re: Table Growing Quickly in Size

    When you say you are deleting all the data from the table, are you using a DELETE statement or a TRUNCATE TABLE statement? I have had similar problems due to corrupt indexes. Have you run any DBCC checks on this table recently?
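    For reference, the two statements behave quite differently — a minimal sketch, assuming the ProblemTable from the first post:

    ```sql
    -- Fully logged: removes rows one at a time, writes each delete to the
    -- transaction log, and leaves the identity counter where it was.
    DELETE FROM dbo.ProblemTable

    -- Minimally logged: deallocates whole data pages rather than logging
    -- individual row deletes, but resets the identity counter to its seed.
    TRUNCATE TABLE dbo.ProblemTable
    ```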

    Originally posted by davidfinnegan
    I have a table that is growing regularly by a huge amount even though I am deleting all data from the table daily and inserting more or less the same amount of data again.


  3. #3
    Good question on truncate versus delete. If you are not in simple recovery mode, those deletes will cause your log file to grow, which may make it look like your database is full when it's really the log. If you can get away with TRUNCATE it is much faster and minimally logged. To see the state of your log file, run this:

    dbcc sqlperf('logspace')

    This will return the total size and percentage used for all of the transaction logs on the server. Based on that you might have to truncate them, change the recovery mode, or change your statements (or rule out the tran log as a problem!).
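    If the log does turn out to be the culprit, a sketch of the era-appropriate commands (SQL Server 7.0 syntax; substitute your own database name, and note the TRUNCATE_ONLY option was removed in later versions):

    ```sql
    -- Report total size and percent used for every transaction log on the server
    DBCC SQLPERF(LOGSPACE)

    -- One way on 7.0 to discard the inactive portion of a log that has grown;
    -- unnecessary if trunc. log on chkpt. is already set
    BACKUP LOG MyDatabase WITH TRUNCATE_ONLY
    ```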

    HTH
    ---------------
    Ray Higdon MCSE, MCDBA, CCNA

  4. #4

    Re: Table Growing Quickly in Size

    Yes, I am indeed using DELETE rather than TRUNCATE, because the table has an identity field and I would like it to remember the last insert value after each delete; TRUNCATE resets the counter. I also have truncate log on checkpoint selected.
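    One way to keep TRUNCATE's minimal logging and still preserve the identity counter is to capture the current maximum first and reseed afterwards — a sketch, assuming the ProblemTable definition from the first post:

    ```sql
    DECLARE @last numeric(18, 0)
    SELECT @last = MAX(col_id) FROM dbo.ProblemTable  -- remember the last value

    TRUNCATE TABLE dbo.ProblemTable                   -- deallocates pages, resets the seed

    -- Restore the counter so the next insert continues from @last + 1
    DBCC CHECKIDENT ('dbo.ProblemTable', RESEED, @last)
    ```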

    I am using sp_spaceused to find out the increase in table size, which happens to be much the same as the increase in database size. Correct me if I am wrong, but I thought sp_spaceused reports disk space used for data and indexes and does not include log space, which is more to do with database size than table size?

    I did run a DBCC CHECKTABLE on it and, if I remember correctly, I was getting about 1 row per page! Unfortunately I cannot confirm exactly, as it happened on Saturday and I dropped and recreated the table to temporarily get around the problem. It is currently happening about once a week.

    Thanks
    David

  5. #5
    Are you using SQL 7.0? You mention trunc log on checkpoint, so you are not having log issues. Have you run

    dbcc showcontig (tablename)

    That will tell you your fragmentation per page, and you might need to reindex.
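    For completeness, rebuilding the indexes on 7.0 would look something like this (table name and fill factor are illustrative):

    ```sql
    -- Rebuild every index on the table; '' means all indexes, 90 is the fill factor
    DBCC DBREINDEX ('dbo.ProblemTable', '', 90)
    ```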

    Also, keep in mind that sp_spaceused is only as accurate as sysindexes; it can report stale information. To force it to update, you can run

    sp_spaceused tablename, true

    HTH
    ---------------
    Ray Higdon MCSE, MCDBA, CCNA

  6. #6
    Ray,

    Thanks for that. The db is version 7.0.

    I didn't use DBCC SHOWCONTIG but should have. It is currently showing
    Avg. Page Density (full).....................: 76.60%
    which is probably a bit low, but I will monitor.

    I don't have a clustered index on the table, just a single index on the identity column.

    If it does turn out that all I need to do to prevent recurrence of the problem is to reindex, what I do not understand is why this process had been running daily without any problem for over 2 years until last month. When I first got the problem I dropped and recreated the table, but had the problem back within a week.

    Thanks
    David

  7. #7
    I would highly recommend a clustered index on your id field. Why? Starting with 7.0 (I believe), heap tables became intelligent in that they will try to reclaim empty space: if I insert 1,000 records and then delete 50 random ones, my next insert will search the leaf-level data for pages with empty space (high cost). If you have a clustered index on your table (especially on a sequential key like an identity), your inserts will fall at the end of the leaf level and you will have less fragmentation. Keep in mind that if you are doing a lot of deletes you will have some empty space, but you can reindex for that purpose.
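    A sketch of the change being suggested, assuming the existing nonclustered index on the identity column is named ix_col_id (your index name will differ):

    ```sql
    -- Drop the existing nonclustered index on the identity column
    DROP INDEX ProblemTable.ix_col_id

    -- Recreate it clustered: inserts on the ever-increasing identity now land
    -- on the last page instead of searching the heap for free space
    CREATE UNIQUE CLUSTERED INDEX ix_col_id ON dbo.ProblemTable (col_id)
    ```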

    HTH
    ---------------
    Ray Higdon MCSE, MCDBA, CCNA

  8. #8
    With this table I only ever delete and then insert all rows once a day, so I am not sure about the clustered index. It does sound like a simple reindex will probably make sure that it doesn't happen again. I will probably let it happen just once more before applying that change, so that I can monitor the effect of the reindex etc.

    The thing that I am still puzzled by is why it worked ok for 2 years, and also why it does not have the same problem on our test server.

    Thanks Ray.

    David

  9. #9

    Ray,

    My table grew in size again last night, so I ran statistics as suggested. As you also suggested, I changed the index to clustered, and it had the immediate effect of reducing the table to its correct size. Thank you for that.

    What I am a bit puzzled by is the stats from before putting the clustered index on the table, which by my reading do not indicate massive fragmentation. Example SHOWCONTIG output:
    TABLE level scan performed.
    - Pages Scanned................................: 717340
    - Extents Scanned..............................: 89682
    - Extent Switches..............................: 89681
    - Avg. Pages per Extent........................: 8.0
    - Scan Density [Best Count:Actual Count].......: 99.98% [89668:89682]
    - Extent Scan Fragmentation ...................: 0.14%
    - Avg. Bytes Free per Page.....................: 2042.8
    - Avg. Page Density (full).....................: 74.76%

    and sp_spaceused:
    name                   rows     reserved     data         index_size   unused
    --------------------   ------   ----------   ----------   ----------   ------
    interface_tool_Recon   709253   5755824 KB   5738744 KB   16224 KB     856 KB
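    The striking figure in those stats is pages versus rows — roughly one row per page, which matches the earlier DBCC CHECKTABLE observation. A quick check, using the numbers from the output above:

    ```sql
    -- 717340 pages for 709253 rows is roughly one row per 8 KB page,
    -- even though six narrow columns should fit hundreds of rows per page
    SELECT 709253.0 / 717340 AS rows_per_page,        -- ~0.99
           717340 * 8 / 1024 / 1024.0 AS approx_gb    -- ~5.47, matching the data column
    ```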


    However, miraculously, the change to a clustered index brought reserved right down to 40,000 KB!

    Any explanation would be appreciated.

    Thanks again
    David

  10. #10
    Reserved for a clustered table is the number of data pages reserved; it is the sum of all the 8-page extents that have been allocated. For example, if you are using 57 8 KB pages (the "data" figure in sp_spaceused), allocation rounds up to 8 whole extents, leaving you with 64 8 KB pages reserved, 7 of them allocated but not used (and those do not affect your fragmentation percentages).
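    That arithmetic as a worked example (57 used pages rounded up to whole 8-page extents):

    ```sql
    -- 57 pages in use -> 8 extents allocated -> 64 pages reserved, 7 unused
    SELECT CEILING(57 / 8.0) AS extents,              -- 8
           CEILING(57 / 8.0) * 8 AS pages_reserved,   -- 64
           CEILING(57 / 8.0) * 8 - 57 AS pages_unused -- 7
    ```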

    Glad it worked out for you, the lower reserved means your data is tighter and less fragmented.

    GL
    ---------------
    Ray Higdon MCSE, MCDBA, CCNA
