
Thread: Database Log

  1. #1
    Join Date
    Jan 2008
    Posts
    20

    Unanswered: Database Log

    I have a table that is updated with about 50,000 records a day.
    The recovery model is set to simple. However, a single update can
    grow the log file from 1 MB to over 20 GB, which causes
    problems on the server. The log file is set to autogrow by 10%.
    It did not always grow this fast.

    Is there a way to control this?
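    A 10% autogrow means each growth event is larger than the last as the
    file grows, which makes growth slower and less predictable. A minimal
    sketch of switching to a fixed increment; the database and logical log
    file names here are placeholders:

        -- Switch the log from 10% autogrow to a fixed 512 MB increment.
        -- 'MyDatabase' and 'MyDatabase_log' are placeholder names; the
        -- real logical name comes from: SELECT name FROM sys.database_files;
        ALTER DATABASE MyDatabase
        MODIFY FILE (NAME = MyDatabase_log, FILEGROWTH = 512MB);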

  2. #2
    Join Date
    Jun 2003
    Location
    Ohio
    Posts
    12,592
    Provided Answers: 1
    Update in smaller batches?
    The log is using as much space as it requires.
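    A minimal sketch of what batching might look like, assuming a
    hypothetical dbo.Target table with a flag column marking the rows to
    change:

        -- Update in chunks so each committed transaction stays small; under
        -- simple recovery the space is reusable again after each checkpoint.
        -- dbo.Target, the processed column, and the filter are placeholders.
        DECLARE @rows int;
        SET @rows = 1;

        WHILE @rows > 0
        BEGIN
            UPDATE TOP (5000) dbo.Target
            SET    processed = 1
            WHERE  processed = 0;

            SET @rows = @@ROWCOUNT;
        END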
    If it's not practically useful, then it's practically useless.

    blindman
    www.chess.com: "sqlblindman"
    www.LobsterShot.blogspot.com

  3. #3
    Join Date
    Apr 2004
    Posts
    14
    Maybe you should consider differential backups instead of simple backups...

    Regards

  4. #4
    Join Date
    Jun 2003
    Location
    Ohio
    Posts
    12,592
    Provided Answers: 1
    What's a "simple backup"?
    He hasn't said anything about what backups he is doing. He is using simple recovery mode, which does not keep a log of transactions. The transactions are stored in the log file briefly, to allow the transaction to be reversed if it cannot be completed. Then the log space is made available for the next transaction, though it is not released to the file system. So, if he is using simple recovery mode and still has a huge transaction log, that suggests he is performing some extremely large transactions.
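    Two standard server-side checks can confirm this:

        -- Current log size and percent used, for every database.
        DBCC SQLPERF (LOGSPACE);

        -- What, if anything, is preventing log space from being reused.
        -- Under simple recovery, ACTIVE_TRANSACTION here points at one
        -- long-running transaction holding the log hostage.
        SELECT name, log_reuse_wait_desc
        FROM   sys.databases;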
    If it's not practically useful, then it's practically useless.

    blindman
    www.chess.com: "sqlblindman"
    www.LobsterShot.blogspot.com

  5. #5
    Join Date
    Apr 2004
    Posts
    14
    Well, I was referring to the recovery model when I mentioned backup....

    Oops...

  6. #6
    Join Date
    Jun 2003
    Location
    Ohio
    Posts
    12,592
    Provided Answers: 1
    Then what's a differential recovery model?
    Nevermind....
    If it's not practically useful, then it's practically useless.

    blindman
    www.chess.com: "sqlblindman"
    www.LobsterShot.blogspot.com

  7. #7
    Join Date
    Apr 2004
    Posts
    14

    Simple backup recovery model with a differential backup type - minding...

  8. #8
    Join Date
    Jun 2003
    Location
    Ohio
    Posts
    12,592
    Provided Answers: 1
    The backups are not going to affect the log size.
    If it's not practically useful, then it's practically useless.

    blindman
    www.chess.com: "sqlblindman"
    www.LobsterShot.blogspot.com

  9. #9
    Join Date
    Jan 2003
    Location
    Massachusetts
    Posts
    5,800
    Provided Answers: 11
    50,000 rows of data updated/inserted per day seems a little small. From the result given, I suspect (as blindman does, I think) that this is all being done in one big hairy transaction. Can I take it this data is brought in via DTS?

    Backups do have a couple of small effects on the transaction log, though, but these only really come into play when you have a very large database, or much larger problems in other places. While I cannot prove it out, I believe the transaction log cannot be truncated during a backup, since the backup needs the transaction log for consistency purposes.

    Still, if this database is only getting 50,000 rows per day, then it cannot be very large, unless there is a lot (and I mean A LOT) of BLOB data being thrown around.
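    If it is one big transaction, DBCC OPENTRAN will catch it in the act
    while the load is running (the database name is a placeholder):

        -- Reports the oldest active transaction in the database,
        -- including its start time and owning SPID.
        DBCC OPENTRAN ('MyDatabase');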

  10. #10
    Join Date
    Jun 2003
    Location
    Ohio
    Posts
    12,592
    Provided Answers: 1
    But a Cartesian join on 50,000 records could blow up the log.
    If it's not practically useful, then it's practically useless.

    blindman
    www.chess.com: "sqlblindman"
    www.LobsterShot.blogspot.com

  11. #11
    Join Date
    Jan 2008
    Posts
    20
    Yes, I use SSIS to bring the data across from source to destination.
    There are a few joins involved. If I update in smaller batches I can control the log, but even updates to the data in the destination blow up the log.
    This was not an issue in the past; the log problems only started recently.

  12. #12
    Join Date
    Jun 2003
    Location
    Ohio
    Posts
    12,592
    Provided Answers: 1
    Are you doing the joins in SSIS?
    Really, you should keep as much code and logic out of DTS packages/SSIS packages as possible.
    In my (modestly, considerable) experience, ETL (Extract/Transform/Load) tools should be used only for EL, and not T. Load the data into a staging table and then write a sproc to cleanse, verify, and process it. Much more robust, much more scalable, much easier to debug.
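    A minimal sketch of that pattern, with all object names hypothetical:
    SSIS does only the EL, bulk-loading raw rows into a staging table, and
    a sproc does the T:

        -- Staging table: SSIS bulk-loads raw rows here, untransformed.
        -- Columns are deliberately loose; validation happens in the sproc.
        CREATE TABLE dbo.Staging_Orders (
            order_id int           NULL,
            customer varchar(50)   NULL,
            amount   decimal(10,2) NULL
        );
        GO

        -- The T step lives in a sproc: cleanse, verify, then apply.
        CREATE PROCEDURE dbo.Process_Staging_Orders
        AS
        BEGIN
            SET NOCOUNT ON;

            -- Cleanse: discard rows that fail basic validation.
            DELETE FROM dbo.Staging_Orders
            WHERE  order_id IS NULL OR amount < 0;

            -- Load: apply the changes to the destination table.
            UPDATE t
            SET    t.amount = s.amount
            FROM   dbo.Orders AS t
                   JOIN dbo.Staging_Orders AS s
                     ON s.order_id = t.order_id;

            TRUNCATE TABLE dbo.Staging_Orders;
        END
        GO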
    If it's not practically useful, then it's practically useless.

    blindman
    www.chess.com: "sqlblindman"
    www.LobsterShot.blogspot.com

  13. #13
    Join Date
    Feb 2004
    Location
    One Flump in One Place
    Posts
    14,912
    You forgot "much easier to convert when you upgrade to SQL Server 2012, in which SSIS is deprecated".
    Testimonial:
    pootle flump
    ur codings are working excelent.

  14. #14
    Join Date
    Jan 2003
    Location
    Massachusetts
    Posts
    5,800
    Provided Answers: 11
    Did the release date for SQL 2011 get pushed out again?

  15. #15
    Join Date
    Feb 2004
    Location
    In front of the computer
    Posts
    15,579
    Provided Answers: 54
    No, SQL 2011 is still on track for RTM on 2013-04-01.

    -PatP
