  1. #1
    Join Date
    Mar 2015
    Posts
    38
    Provided Answers: 1

    Answered: To shrink or not to shrink

    My database is big, million(s) of records big, and I am using SQL Server 2008 R2 Express for development. I have limited the amount of data to 4 or 5 weeks because of the size limit for Express. I have noticed that as the database grows, updates get slower; I have seen drops from 33,000 records per minute to 17,000. Because of the constant updating and deleting, my logs get bigger than the data files. I read in posts that the database needed to be backed up and shrunk. I did, and speeds increased for a while. Today I was researching automating the back-up-and-shrink process and found several articles saying "stop shrinking your database," yet I have seen MSDN articles saying to back up and shrink to decrease log sizes. Frankly, I am confused. To shrink or not to shrink?
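    The "back up and shrink" routine described above looks roughly like this (a sketch only; the database name MyDb and the backup path are illustrative assumptions, not from the post):

    Code:
        -- Full database backup, then shrink the whole database back down.
        -- MyDb and the backup path are illustrative assumptions.
        BACKUP DATABASE MyDb TO DISK = N'C:\Backups\MyDb.bak' WITH INIT;
        DBCC SHRINKDATABASE (MyDb);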

  2. Best Answer
    Posted by Pat Phelan

    "The answer is the ever popular: it depends.

    As a general rule of thumb, shrink is hugely expensive in terms of time and server resources. As a side effect, a shrink also creates internal fragmentation within the database which will further degrade your performance. Transaction log backups will free log space allowing it to be either re-used or if necessary removed (via shrink).

    There are only a few cases where I can justify using a shrink operation.
    1. Unexpected activity causing catastrophic database growth (i.e. SharePoint).
    2. Planned activity that happens rarely (90+ days between occurances).
    3. Database decomissioning (shrink before final backup).
    4. Planned move to other storage (SAN or Windows Server move).

    Other than those cases, if you shrink your database it is a sign that either the DBA isn't doing their job or the business is too cheap to buy sufficient storage for the system.

    Keep in mind that everything you do from the time you shrink a database until you do the next index maintenance (you do use Ola Hallengren's https://ola.hallengren.com/ scripts, or the SQL Server Maintenance plans, right???) will pay the performance price for having done that shrink.

    -PatP"


  3. #2
    Join Date
    Feb 2004
    Location
    In front of the computer
    Posts
    15,579
    Provided Answers: 54
    The answer is the ever popular: it depends.

    As a general rule of thumb, shrink is hugely expensive in terms of time and server resources. As a side effect, a shrink also creates internal fragmentation within the database, which will further degrade your performance. Transaction log backups will free log space, allowing it to be either re-used or, if necessary, removed (via a shrink).
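    Freeing log space that way looks roughly like this (a sketch only; MyDb, the backup path, and the logical log file name MyDb_log are assumptions):

    Code:
        -- Back up the transaction log so the inactive portion can be reused.
        BACKUP LOG MyDb TO DISK = N'C:\Backups\MyDb_log.trn';
        -- Only if the file itself must physically be made smaller (the rare case):
        DBCC SHRINKFILE (MyDb_log, 1024);  -- target size in MB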

    There are only a few cases where I can justify using a shrink operation.
    1. Unexpected activity causing catastrophic database growth (e.g. SharePoint).
    2. Planned activity that happens rarely (90+ days between occurrences).
    3. Database decommissioning (shrink before final backup).
    4. Planned move to other storage (SAN or Windows Server move).

    Other than those cases, if you shrink your database it is a sign that either the DBA isn't doing their job or the business is too cheap to buy sufficient storage for the system.

    Keep in mind that everything you do from the time you shrink a database until you do the next index maintenance (you do use Ola Hallengren's https://ola.hallengren.com/ scripts, or the SQL Server Maintenance plans, right???) will pay the performance price for having done that shrink.
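    The cleanup after a shrink amounts to rebuilding the indexes the shrink fragmented. A sketch (the table and index names are hypothetical; Ola Hallengren's IndexOptimize procedure automates the same idea across the whole database):

    Code:
        -- Rebuild an index whose pages were scrambled by the shrink.
        -- dbo.MyBigTable and IX_MyBigTable_LoadDate are hypothetical names.
        ALTER INDEX IX_MyBigTable_LoadDate ON dbo.MyBigTable
        REBUILD WITH (SORT_IN_TEMPDB = ON);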

    -PatP
    In theory, theory and practice are identical. In practice, theory and practice are unrelated.

  4. #3
    Join Date
    Mar 2015
    Posts
    38
    Provided Answers: 1
    Thanks for the tip, but I'm still confused. On the development server I often load data, then delete the data and reload after a program, index, or table change. When you're talking about loading 6-12 million records, any way to speed things up is good. I'm thinking "unexpected activity causing catastrophic database growth." In production, weekly loads will be about 3.5 million records to the main table, and daily we will add/update a million or two in a one-to-one transaction table. Once a week we will delete records over 4 weeks old, probably about 3.5 million. I need it as lean and mean as I can get. Would deleting a fourth of the database and loading new records back be considered "planned activity that happens rarely"?
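    One common way to keep a purge of that size from blowing up the log is to delete in batches, so each transaction stays small and the log can be reused between batches. A sketch under assumed table and column names (dbo.MainTable, LoadDate), not something prescribed in this thread:

    Code:
        -- Delete rows older than 4 weeks in batches of 50,000.
        -- dbo.MainTable and LoadDate are hypothetical names.
        WHILE 1 = 1
        BEGIN
            DELETE TOP (50000) FROM dbo.MainTable
            WHERE LoadDate < DATEADD(WEEK, -4, GETDATE());

            IF @@ROWCOUNT = 0 BREAK;
        END;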
    Last edited by wjburke2; 08-06-15 at 16:02.

  5. #4
    Join Date
    Aug 2008
    Posts
    147
    There are other factors you can explore that may well give you better performance, such as the recovery model and how your data files and log files are managed. For example, allocating space for the transaction log can be a large cost: are you preallocating the transaction log? There are many factors, and I'd have to understand the context better to give more specific advice.
    An excellent article to read is the Data Loading Performance Guide, http://www.sqlserver-dba.com/2011/11...nce-guide.html .
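    As a rough illustration of the recovery model and preallocation points (MyDb, the logical log file name MyDb_log, and the 4096 MB size are assumptions):

    Code:
        -- Check the current recovery model.
        SELECT name, recovery_model_desc FROM sys.databases WHERE name = N'MyDb';

        -- On a dev box that is reloaded from source data anyway, SIMPLE keeps the log small.
        ALTER DATABASE MyDb SET RECOVERY SIMPLE;

        -- Pre-grow the log once to its working size instead of paying for many small autogrowths.
        ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_log, SIZE = 4096MB);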
    ---------------------------------
    www.sqlserver-dba.com
    www.dba-db2.com
