  1. #1
    Join Date
    Dec 2003
    Posts
    61

    Unanswered: Does DB Size decrease when I delete a huge table ??

    Hi,
    My DB size (right-click the DB name, Data Files tab, Space Allocated field) was 10914 MB.

    I deleted a huge table (1.2 million records * 15 columns).
    I checked the DB size again. It didn't change.
    Shouldn't it decrease because I deleted a huge table?

  2. #2
    Join Date
    Nov 2003
    Location
    Christchurch, New Zealand
    Posts
    1,618
    The size will only decrease if you shrink the database (the DBCC SHRINKDATABASE command).

  3. #3
    Join Date
    Dec 2003
    Posts
    61
    Originally posted by rokslide
    The size will only decrease if you shrink the database (the DBCC SHRINKDATABASE command).
    When and why should we shrink the DB?

  4. #4
    Join Date
    Dec 2003
    Posts
    61
    Originally posted by rokslide
    The size will only decrease if you shrink the database (the DBCC SHRINKDATABASE command).
    Would the deleted data still be in the physical database file?

  5. #5
    Join Date
    Nov 2003
    Location
    Christchurch, New Zealand
    Posts
    1,618
    Whether to use SHRINKDATABASE depends on who you talk to... I don't really have a theory about it, except to say that if the database is taking up too much space, shrink it...

    As for the data still being in the database file... I would guess (and it's just a guess) that it would still be there, but inaccessible through the database. I think the delete works much like deleting a file off your hard drive: all that is really deleted is the file header information; the data is still there, but you can't access it unless you use a tool to specifically locate it.

    Does that make sense? Can anyone else offer more advice?
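    One quick way to see what I mean is sp_spaceused: after the delete, the overall database size it reports stays the same, but the "unallocated space" figure goes up, because the freed pages stay inside the data file until you shrink it. A rough sketch (the database name is just a placeholder):

    USE YourDatabase
    GO
    -- @updateusage forces SQL Server to recalculate the space counters first
    EXEC sp_spaceused @updateusage = 'TRUE'
    GO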

  6. #6
    Join Date
    Dec 2003
    Posts
    61
    Originally posted by rokslide
    Whether to use SHRINKDATABASE depends on who you talk to... I don't really have a theory about it, except to say that if the database is taking up too much space, shrink it...

    As for the data still being in the database file... I would guess (and it's just a guess) that it would still be there, but inaccessible through the database. I think the delete works much like deleting a file off your hard drive: all that is really deleted is the file header information; the data is still there, but you can't access it unless you use a tool to specifically locate it.

    Does that make sense? Can anyone else offer more advice?

    Can you give a quick example of how to use SHRINKDATABASE?

  7. #7
    Join Date
    Dec 2003
    Posts
    61
    Originally posted by forXLDB
    Can you give a quick example of how to use SHRINKDATABASE?
    How do I know what size to reduce it to?
    (If the size doesn't decrease even after deleting tables, how do I know the exact size actually occupied by all the tables?)
    I would have deleted and recreated lots of huge tables of around a million records at least 10 times...

    And above all, does the size affect SQL Server performance in any way?

  8. #8
    Join Date
    Nov 2003
    Location
    Christchurch, New Zealand
    Posts
    1,618
    Straight from Books Online:

    This example decreases the size of the files in the UserDB user database to allow 10 percent free space in the files of UserDB.

    DBCC SHRINKDATABASE (UserDB, 10)
    GO
    Or you can do it through Enterprise Manager (EM).
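    If all you want is to release the free space at the end of the files back to the operating system, without SQL Server moving any pages around, Books Online also documents a TRUNCATEONLY option. Something along these lines (UserDB is just the sample name again):

    -- TRUNCATEONLY releases unused space at the end of the files;
    -- no pages are relocated, and the target percent is ignored
    DBCC SHRINKDATABASE (UserDB, 10, TRUNCATEONLY)
    GO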

  9. #9
    Join Date
    Dec 2003
    Posts
    61
    Originally posted by rokslide
    Straight from Books Online:
    ...
    Or you can do it through Enterprise Manager (EM).
    I just used it on another test database, and it shrank by more than half...

    Thanks anyway!

    Still looking for an answer on whether a bigger db file size degrades performance in any way.

  10. #10
    Join Date
    Nov 2003
    Location
    Christchurch, New Zealand
    Posts
    1,618
    Guessing once again, I would say that it shouldn't affect the speed to any great extent... fragmentation of the database file would, though. There are some commands/functions you can use to find this kind of thing out, but I have never used them, so I can't really help a lot, I'm sorry.

  11. #11
    Join Date
    Dec 2003
    Posts
    61
    Originally posted by rokslide
    Guessing once again, I would say that it shouldn't affect the speed to any great extent... fragmentation of the database file would, though. There are some commands/functions you can use to find this kind of thing out, but I have never used them, so I can't really help a lot, I'm sorry.
    Thanks again...

    Let me know of any functions!

  12. #12
    Join Date
    Dec 2003
    Posts
    454
    Since the log file usually takes up the bulk of the database's size, you can use the following script to reduce the file size:

    USE DatabaseName
    GO
    DBCC SHRINKFILE ('DatabaseName_Log', Size)
    GO

    Here "Size" is the number of MB, i.e., if you put 100, that means the file will be shrinked to 100 MB.

  13. #13
    Join Date
    Feb 2002
    Posts
    2,232
    Since you are already working in EM: when you right-click the database, go to All Tasks -> Shrink Database -> Files... Under database file, look at the data file and the log file. What is the current size/space used? Are you backing up the database/transaction log? What is the recovery model?

    What is the performance issue?
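    If it is quicker than clicking through EM, the recovery model can also be checked from Query Analyzer; a small sketch, with YourDatabase as a placeholder:

    -- recovery model: FULL, SIMPLE or BULK_LOGGED
    -- (under FULL, the log keeps growing until you back it up)
    SELECT DATABASEPROPERTYEX('YourDatabase', 'Recovery') AS RecoveryModel
    GO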

  14. #14
    Join Date
    Feb 2002
    Posts
    2,232
    You can use "dbcc showcontig" to display fragmentation info. Yes performance will be affected - you need to either drop/recreate the indexes, use dbcc dbreindex or dbcc indexdefrag. Since you deleted so many records (more than half), fragmentation is a good start for performance issues.

  15. #15
    Join Date
    Feb 2002
    Posts
    2,232
    Do you have access to Books Online (BOL) ?
