  1. #1
    Join Date
    Jan 2007
    Posts
    56

    Unanswered: Temporarily avoiding writing into trans log

    Guys,

    Is there a way to temporarily disable writing to the transaction log?

    In our system, we perform purging of our database every night, where the purging consists of 2 steps:

    1. For each table, insert the data to be deleted into a corresponding "purged" table, where it remains for one day only.

    2. For each table, delete the unnecessary data (i.e., the same data stored in the purged tables in step 1).

    During these 2 steps the transaction log grows, and since we perform transaction log backups, the backup at that time is huge. We are running a bit low on hard disk space, and I'd like to disable writing to the transaction log while these operations are performed.

    I really don't care about being able to recover this data.

    I thought that one option is to set the database to simple recovery, then perform the purging of the database, and then change back to full.

    However, I think the transaction log can grow even if the recovery model is simple [although you won't be able to retrieve any changes].

    So, is there a way to delete a portion of a table [or insert into it] so that no data is written to the transaction log? (I know that we can use TRUNCATE if we need to remove a whole table's contents without logging.)


    Thanks a lot

  2. #2
    Join Date
    Jun 2003
    Location
    Ohio
    Posts
    12,592
    Provided Answers: 1
    The tran log is going to be used even in simple recovery in order to provide the option of rolling back the transaction if the process fails for some reason. In simple recovery mode, however, the transaction log is truncated after each transaction rather than being saved. And if you put the database in simple recovery mode there is no reason or benefit to doing a transaction log dump. As a matter of fact, you CAN'T dump a transaction log when in simple recovery.
    I would suggest running your regular transaction log dumps during the day, doing a final transaction log dump prior to your process, putting the database in simple recovery mode, and then running your process. As soon as your process completes you should put the database back in full recovery and then run your regular database backup.
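    Roughly, the sequence might look like this in T-SQL (the database name and backup paths are just placeholders):

    -- Final log dump before the purge
    BACKUP LOG MyDatabase TO DISK = 'D:\Backups\MyDatabase_log_final.trn';

    -- Switch to simple recovery for the duration of the purge
    ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;

    -- ... run the nightly purge process here ...

    -- Switch back to full recovery and take a full backup to restart the log chain
    ALTER DATABASE MyDatabase SET RECOVERY FULL;
    BACKUP DATABASE MyDatabase TO DISK = 'D:\Backups\MyDatabase_full.bak';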
    If it's not practically useful, then it's practically useless.

    blindman
    www.chess.com: "sqlblindman"
    www.LobsterShot.blogspot.com

  3. #3
    Join Date
    Mar 2007
    Location
    Holmestrand, Norway
    Posts
    332
    As I understand the case, you have one table of live data and one table of one-day-old historic data, and the problem is that the transaction log grows too much while moving data from one table to the other. Blindman's approach appears to work fine, but if you (like me) would rather do the task without breaking the log chain, I might have an alternative approach for you:

    First switch to the bulk_logged recovery model. Then drop the table containing the one-day-old historic data. Use select...into to recreate the table containing historic data, delete those rows from the table containing live data, then change the recovery model back to full. This approach should require quite a bit less log space than running in the full recovery model through the whole process.
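    A rough sketch of what I mean, with made-up names (MyDatabase, LiveData, LiveData_Purged) and a hypothetical CreatedDate column for the age check:

    ALTER DATABASE MyDatabase SET RECOVERY BULK_LOGGED;

    DECLARE @cutoff DATETIME;
    SET @cutoff = DATEADD(day, -1, GETDATE());

    -- Throw away yesterday's historic data
    DROP TABLE LiveData_Purged;

    -- SELECT...INTO is minimally logged under the bulk_logged recovery model
    SELECT *
    INTO   LiveData_Purged
    FROM   LiveData
    WHERE  CreatedDate < @cutoff;

    -- The delete itself is still fully logged
    DELETE FROM LiveData
    WHERE  CreatedDate < @cutoff;

    ALTER DATABASE MyDatabase SET RECOVERY FULL;

    -- Back up the log afterwards; the log chain stays intact, although this
    -- backup will include the extents changed by the bulk operation
    BACKUP LOG MyDatabase TO DISK = 'D:\Backups\MyDatabase_log.trn';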

  4. #4
    Join Date
    Jan 2004
    Location
    In a large office with bad lighting
    Posts
    1,040
    To add to Blind Dude's post ... you could do it in smaller chunks instead of all at once after you place the database in simple recovery mode.

    The tran log is checkpointed after each transaction in simple mode, meaning that once the transaction has been committed that part of the log is marked inactive and can be reused.

    So if you have 100 thousand rows to do, maybe do 10 thousand at a shot. If that's too much ... drop it down a little bit. Keep playing with it until you are happy with the amount of active log space.
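    For example (a hypothetical sketch with placeholder table and column names, assuming SQL Server 2005 or later; on 2000 you would use SET ROWCOUNT instead of DELETE TOP):

    DECLARE @rows INT;
    SET @rows = 1;

    WHILE @rows > 0
    BEGIN
        -- Each batch is its own transaction, so under simple recovery the
        -- log space it used can be reused after the next checkpoint
        DELETE TOP (10000)
        FROM   LiveData
        WHERE  CreatedDate < DATEADD(day, -1, GETDATE());

        SET @rows = @@ROWCOUNT;
    END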

    Again ... read the last line of his post. Do a full backup of the database once you put it back in full recovery mode.

    -- This is all just a Figment of my Imagination --

  5. #5
    Join Date
    Feb 2004
    Location
    In front of the computer
    Posts
    15,579
    Provided Answers: 54
    You may have specifications that you didn't post, but I would:

    1) Drop the archive table
    2) Rename the production table to the archive table name
    3) Create a new production table.
    4) Repeat as necessary for other table pairs.

    This would do the job as specified, with zero log growth.
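    In T-SQL the swap might look something like this (Orders / Orders_Archive are made-up names, and the new production table would of course have to be recreated with the same columns, indexes, constraints, and permissions as the original):

    -- 1) Drop the archive table
    DROP TABLE Orders_Archive;

    -- 2) Rename the production table to the archive name
    EXEC sp_rename 'Orders', 'Orders_Archive';

    -- 3) Recreate the production table
    CREATE TABLE Orders
    (
        OrderID     INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
        CreatedDate DATETIME NOT NULL
        -- ...remaining columns as in the original production table...
    );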

    -PatP

  6. #6
    Join Date
    Jun 2003
    Location
    Ohio
    Posts
    12,592
    Provided Answers: 1
    Oooh. I wouldn't assume that his production table has no relationships with other tables...
    If it's not practically useful, then it's practically useless.

    blindman
    www.chess.com: "sqlblindman"
    www.LobsterShot.blogspot.com

  7. #7
    Join Date
    Feb 2004
    Location
    In front of the computer
    Posts
    15,579
    Provided Answers: 54
    That's why I started with:
    Quote Originally Posted by Pat Phelan
    You may have specifications that you didn't post
    I only solved the problem based on the specifications given; I didn't make up any new ones that the original poster didn't include.

    -PatP

  8. #8
    Join Date
    Jun 2003
    Location
    Ohio
    Posts
    12,592
    Provided Answers: 1
    Quote Originally Posted by Pat Phelan
    That's why I started with: I only solved the problem based on the specifications given; I didn't make up any new ones that the original poster didn't include.
    Your work shows a distinct lack of creativity. C-
    If it's not practically useful, then it's practically useless.

    blindman
    www.chess.com: "sqlblindman"
    www.LobsterShot.blogspot.com

  9. #9
    Join Date
    Feb 2004
    Location
    In front of the computer
    Posts
    15,579
    Provided Answers: 54
    Quote Originally Posted by blindman
    Your work shows a distinct lack of creativity. C-
    Oh great zot! I would think that you of all people should know better than to encourage my creativity!

    -PatP

  10. #10
    Join Date
    Feb 2004
    Location
    One Flump in One Place
    Posts
    14,912
    FYI
    http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=80349

    @sql_er - it looks like this is resolved at SQLTeam. Please could you let us know either way so people know if this thread is actually resolved & closed.
    Testimonial:
    pootle flump
    ur codings are working excelent.

  11. #11
    Join Date
    Jan 2007
    Posts
    56
    Folks, thanks for all the advice. My final approach is similar to the suggestions stated here, and the issue was resolved (as stated by pootle) here:

    http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=80349


    Thanks a lot for all the input!

  12. #12
    Join Date
    Nov 2002
    Location
    Jersey
    Posts
    10,322
    Quote Originally Posted by blindman
    Your work shows a distinct lack of creativity. C-
    What? Smoking crack so early?

    In any case, what a mess if you actually have to do this... I'm surprised you didn't ask why they want to do something as silly as this.
    Brett
    8-)

    It's a Great Day for America everybody!

    dbforums Yak CorralRadio 'Rita
    dbForums Member List
    I'm Good Once as I ever was

    The physical order of data in a database has no meaning.

  13. #13
    Join Date
    Feb 2007
    Posts
    62
    Quote Originally Posted by sql_er
    I really don't care about being able to recover this data
    Then just delete it. Why keep it if it's not important enough to be handled transactionally or recovered?
    You also need to consider that other processes using this table will need these deletions logged to maintain integrity across their transactions. It's not just about "you"!

  14. #14
    Join Date
    Feb 2007
    Posts
    62
    Oh, and TRUNCATE is logged too; it just logs the page deallocations rather than every individual row.
