  1. #1
    Join Date
    Nov 2013
    Posts
    35

    Question Unanswered: Cleaning SQL Server

    My SQL Server's disk is getting close to full capacity, which is causing certain reports that we run via the SQL Server to time out; I don't think there is enough space left on the server.

    Does anybody have any tips on cleaning out a SQL Server? Are there any folders that can safely be deleted to clear space? I know that on a local computer the %temp% folder can be cleaned out, but when dealing with servers you don't want to make too many changes, because that can cause major problems down the road.

    Any advice anybody?

  2. #2
    Join Date
    Jan 2003
    Location
    Massachusetts
    Posts
    5,800
    Provided Answers: 11
    I have not seen a server get sluggish just because there wasn't much free space left. Low disk space can complicate loads and updates, if you run out of space in a filegroup or the transaction log can't contain the whole transaction, but SELECT performance is not usually impacted. It is more likely that you need to update the statistics on your indexes, especially if a lot of data has been added in the recent past. What do the execution plans for the SELECT statements in the reports look like?
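    If stale statistics do turn out to be the issue, refreshing them is straightforward. As a sketch (the table name here is a placeholder, not anything from this thread):
    ```sql
    -- Check when statistics were last updated for a given table
    SELECT s.name AS stats_name,
           STATS_DATE(s.object_id, s.stats_id) AS last_updated
    FROM sys.stats s
    WHERE s.object_id = OBJECT_ID('dbo.YourTable');

    -- Refresh statistics for that table with a full scan
    UPDATE STATISTICS dbo.YourTable WITH FULLSCAN;

    -- Or refresh statistics for every table in the current database
    EXEC sp_updatestats;
    ```
    FULLSCAN gives the most accurate statistics but takes the longest; sp_updatestats only touches statistics that have seen modifications.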

  3. #3
    Join Date
    Jan 2007
    Location
    UK
    Posts
    11,434
    Provided Answers: 10
    Tread carefully.

    If you don't know what you're doing then it might be best to bring someone in to do the job for you.

    We can give you some advice, but we don't know your environment, backup strategies, etc.
    To start with: identify your data and log files for your databases and note down their sizes. Are any log files an order of magnitude bigger than their respective data files?
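    One way to get those file sizes in a single pass (a sketch using the standard catalog view; size is reported in 8 KB pages, hence the divide by 128):
    ```sql
    -- List every data and log file on the instance with its size in MB
    SELECT DB_NAME(database_id) AS database_name,
           name                 AS logical_file_name,
           type_desc            AS file_type,   -- ROWS = data, LOG = log
           size / 128           AS size_mb,
           physical_name
    FROM sys.master_files
    ORDER BY database_name, file_type;
    ```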
    George
    Home | Blog

  4. #4
    Join Date
    Dec 2007
    Location
    Richmond, VA
    Posts
    1,328
    Provided Answers: 5
    This sounds more like a case where, due to the amount of data, access paths may have changed. Take George's suggestion above.
    Dave

  5. #5
    Join Date
    Oct 2014
    Posts
    268
    Provided Answers: 6
    https://msdn.microsoft.com/en-us/library/ms189035.aspx

    What about shrinking the database? Maybe George or Dave could elaborate a little more on shrinking.

    What about bringing in a consultant or maybe you could set up a SAN to back up your data?

    Another Article I found that I thought was informative and interesting. https://www.simple-talk.com/sql/data...in-sql-server/

    FYI: not a DBA or expert, but I am running into a similar issue.

  6. #6
    Join Date
    Nov 2013
    Posts
    35
    MCrowley, thank you for the reply, as always. It seems you are involved in most posts that I place on this forum. In all honesty, I do not know what the execution plans for the SELECT statements in the reports look like. I have little to no experience with actual SQL coding, which is why I need to start taking certs to increase my expertise. With everybody else's posts I now have more than enough information to keep me busy and help me understand my environment better. When I am finished reading and analyzing I will post my results.

  7. #7
    Join Date
    Jan 2003
    Location
    Massachusetts
    Posts
    5,800
    Provided Answers: 11
    Time for you to see what one looks like. When one is running next time, run this:
    Code:
    select r.start_time, r.session_id, s.login_name, s.host_name, s.program_name,
           u.MBUsed as "TempDB Used MB", r.command, r.blocking_session_id,
           r.wait_type, r.wait_resource, r.wait_time, r.last_wait_type,
           t.text, p.query_plan
    from sys.dm_exec_requests r
    join sys.dm_exec_sessions s on r.session_id = s.session_id
    join (select session_id,
                 sum(internal_objects_alloc_page_count + user_objects_alloc_page_count) / 128 as MBUsed,
                 db_name(database_id) as DBName
          from sys.dm_db_task_space_usage
          group by database_id, session_id) u on r.session_id = u.session_id
    cross apply sys.dm_exec_sql_text (r.sql_handle) t
    cross apply sys.dm_exec_query_plan (r.plan_handle) p
    where s.is_user_process = 1
      and r.session_id <> @@SPID
    The query plan will look like a URL. Go ahead and click on it, and you will get a pretty picture. The pictures take a while to get used to, so start looking at them now. If you have a query that is not running, you can always get the estimated query plan by pasting the query into a query window and clicking the "Display Estimated Execution Plan" button. Remember: data flows right to left; you read the logic left to right.

  8. #8
    Join Date
    Nov 2013
    Posts
    35
    I will attempt to do this when I get an opportunity. I spoke with a colleague and we are adding 10 GB to the C: drive of our SQL Server tonight; it is down to 2% free space. We are a smaller company with only 150 workstations in total, so I feel this additional 10 GB of disk space will do wonders for the server. I also wanted to ask about two simple statements I found while reading the links you all provided:

    BACKUP LOG <database_name> WITH TRUNCATE_ONLY
    Apparently this is a simple statement which has the net effect of removing all of the inactive transactions from the log file, which would otherwise have been removed by a standard log backup.

    DBCC SHRINKFILE (filename, target_size)
    This is supposed to shrink a data or log file down to the target size (the T-SQL equivalent of shrinking via the GUI).

    What do you guys think about these statements?

  9. #9
    Join Date
    Jan 2007
    Location
    UK
    Posts
    11,434
    Provided Answers: 10
    What's the recovery model of the database you are wanting to run this against?
    Code:
    SELECT name As database_name
         , recovery_model_desc
    FROM   sys.databases
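    And before shrinking anything, it's worth seeing how much of each transaction log is actually in use. A quick way to check (this reports log size and percent used for every database on the instance):
    ```sql
    -- Log file size and percentage currently in use, per database
    DBCC SQLPERF (LOGSPACE);
    ```
    If a log is huge but mostly empty, that usually points at a FULL-recovery database that isn't getting log backups.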
    George
    Home | Blog

  10. #10
    Join Date
    Feb 2004
    Location
    In front of the computer
    Posts
    15,579
    Provided Answers: 54
    Quote Originally Posted by LaC0saNostra View Post
    I will attempt to do this when I get an opportunity. I spoke with a colleague and we are adding 10 GB to the C: drive of our SQL Server tonight; it is down to 2% free space.
    If your C: drive is 98% full, I'd bet that Windows itself is slow on that machine!

    I think that SQL may be a "victim" in your problem rather than being the "culprit".

    -PatP
    In theory, theory and practice are identical. In practice, theory and practice are unrelated.

  11. #11
    Join Date
    Aug 2004
    Location
    Dallas, Texas
    Posts
    831
    We can only hope they didn't put the SQL files on the local drive.

  12. #12
    Join Date
    Nov 2013
    Posts
    35
    We do not have the SQL files on the local drive; we have a sandbox drive (E:) and a SQL data drive (D:). Now that the extra 10 GB has been added, the login errors we were experiencing have gone away, so one of my problems is solved, which is good.

    The report we are attempting to run in Yardi still cannot run. We have an "allscatt" code which pulls every property we have. When running individual properties the report runs fine, but as soon as we ask for every property we manage it still fails.

    Exception Messages:
    Thread was being aborted.
    at YSI.Residential.CorrespondenceMgmt.ysiReportManager.DoReport()
    at YSI.Residential.CorrespondenceMgmt.ResCorrespondenceMgmt.GeneratePdf_Click(Object sender, EventArgs e)
    at YSI.Controls.ysiButton.OnClick(EventArgs e)
    at YSI.Controls.ysiButton.RaisePostBackEvent(String eventArgument)
    at YSI.BasePage.ysiBasePage.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument)
    at System.Web.UI.Page.RaisePostBackEvent(NameValueCollection postData)
    at System.Web.UI.Page.ProcessRequestMain()

    This is the error message I get. I'm starting to think this may not have anything to do with server space. Regardless this is what I am trying to work through. Anybody recognize any of these errors?

  13. #13
    Join Date
    Jan 2007
    Location
    UK
    Posts
    11,434
    Provided Answers: 10
    Not familiar with Yardi, and the posted error message doesn't exactly give much away.
    If it works for some parameters but not a specific one, I'd suggest that:

    a) the problem is in the report definition itself (an invalid condition)
    b) the amount of data is too much for the report
    c) the data is being returned too slowly (a timeout caused by poor performance)
    George
    Home | Blog

  14. #14
    Join Date
    Nov 2013
    Posts
    35
    Yeah, I figured it had something to do with timing out on such a large report. It is weird, because this problem apparently started about a month ago; before that, the "allscatt" report was running fine. Nothing was really changed on the servers, so that's why I am pondering how to fix this. I do appreciate everybody's input and advice thus far.

    What does recovery model actually mean in layman's terms?

  15. #15
    Join Date
    Jan 2007
    Location
    UK
    Posts
    11,434
    Provided Answers: 10
    The problem with these "clever" parameters is that they can really kill your performance. I won't go in to too much detail here as to why this happens, but if you share the SQL code with us we might be able to offer alternative approaches.

    Recovery models...

    There are 3 recovery models in SQL Server: SIMPLE, FULL and BULK-LOGGED.


    So you know about database backups.
    If you take a backup every day, and something goes wrong, you can restore your data from your latest backup.
    If you do this, you lose all your data since the latest [good] backup.
    This is basically the SIMPLE recovery model.

    Other recovery models can allow us to restore our database backup and the transactions up to the point of failure.

    The FULL recovery model logs all transactions (in your transaction log file[s]). This is the most complete model allowing for the greatest amount of recovery.

    The BULK-LOGGED model doesn't fully log all transactions. However, it does allow for transaction log backups. Therefore you can recover up to your latest [good] log backup.

    For an example: You take your backups every day at midnight. You find a problem at 1730 and you need to restore from backups.
    In a SIMPLE recovery model you can only restore your backup from midnight.
    In BULK-LOGGED, assuming you take log backups every 3 hours, you could potentially restore up to 1500.
    In FULL recovery, you may be able to recover right up to 1730 (immediately prior to the issue).
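    In the FULL recovery case, that point-in-time restore might look something like this (a sketch only; the database name, file paths and timestamp are placeholders):
    ```sql
    -- Back up the tail of the log first, leaving the database in a restoring state
    BACKUP LOG MyDatabase TO DISK = 'E:\Backups\MyDatabase_tail.trn' WITH NORECOVERY;

    -- Restore the full backup from midnight, then roll the log forward
    RESTORE DATABASE MyDatabase FROM DISK = 'E:\Backups\MyDatabase_full.bak' WITH NORECOVERY;
    RESTORE LOG MyDatabase FROM DISK = 'E:\Backups\MyDatabase_tail.trn'
        WITH STOPAT = '2015-02-25 17:29:00', RECOVERY;
    ```
    STOPAT stops the roll-forward at the given time, so you recover everything up to just before the problem occurred.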

    That's as simply as I can explain it, really.
    I hope that makes sense [and is correct]!

    You may want to read the manual on the subject: https://msdn.microsoft.com/en-us/library/ms189275.aspx
    Last edited by gvee; 02-25-15 at 12:55. Reason: [poor] example added
    George
    Home | Blog
