  1. #1
    Join Date
    Apr 2003
    Location
    Phoenix, AZ
    Posts
    177

    Unanswered: Event driven log backup

    Has anyone explored a process whereby the transaction log would be backed up based upon a defined threshold, i.e. 75% full? All the research against newsgroup posts and SQL 2000 literature seems to point to scheduling a log backup job on a periodic basis. My workload isn't consistent around the clock, and even adjusting the interval during the day isn't a good answer when the multiple databases on a server fill their logs at different rates.
    Fred Prose

  2. #2
    Join Date
    Aug 2002
    Location
    Scotland
    Posts
    1,578
    What is the issue with scheduling a transaction log backup job at regular intervals?

    To create a set of backups, you typically make a database backup at periodic intervals, such as daily, and transaction log backups at shorter intervals, such as every 10 minutes.

    You can schedule the Transaction log backup job at different times for each database without any issue.
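
    As a minimal sketch of what each such scheduled job step amounts to (the database name and disk path below are placeholders, not anything from this thread):

    ```sql
    -- Back up the transaction log of one database to a file.
    -- 'YourDb' and the backup path are illustrative placeholders.
    BACKUP LOG YourDb
        TO DISK = 'D:\Backups\YourDb_log.trn'
        WITH NOINIT,   -- append to the existing backup set
             DESCRIPTION = 'Scheduled transaction log backup'
    ```

    Each database gets its own job (or job step) like this, scheduled at whatever interval suits its activity.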

    Log shipping is another good tool to take up the challenge.
    --Satya SKJ
    Microsoft SQL Server MVP

  3. #3
    Join Date
    Apr 2003
    Location
    Phoenix, AZ
    Posts
    177
    Well, the issue is that in our environment, periodic scheduling is not working well. We run 50 different databases (same structure) on a server. Some are hit heavily, some very infrequently, and in both cases the time of day influences the load. We tend to end up with (too many) VERY small log backups most of the day, and DBs locking up with full logs on occasion. Letting logs grow and then shrinking them has proven too resource intensive.

    The concept of backing up logs based upon either a threshold or a "log full" condition is not new; other DBMSs support this as a standard feature.
    Fred Prose

  4. #4
    Join Date
    Jul 2003
    Location
    San Antonio, TX
    Posts
    3,662
    Satya, and how would Log Shipping answer the poster's question?

    fprose,

    There are several ways you can accomplish this. I've done it by creating a custom alert that invokes a TrxDump job. Another scheduled task polled the databases against a predefined threshold (different for each database) and raised the error that triggered the alert when the condition was met. I did discover issues with this approach and had to abandon the alert-based solution because jobs were colliding at times. At that point I stored the fact that the condition was met, along with the database name, in a table. Yet another job fired periodically (every minute), checked whether its previous instance had completed (and exited if not), and worked through the queue of databases to do the log dumps.
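
    A rough sketch of that queue-based approach on SQL Server 2000, assuming a work table named BackupQueue in msdb (the table, threshold, and backup path are all illustrative placeholders, not the poster's actual code). The polling step reads the Percent Log Used counter from master.dbo.sysperfinfo; the drain step is the every-minute job:

    ```sql
    -- Polling step: enqueue databases whose log usage exceeds a threshold.
    -- The LIKE on object_name covers both 'SQLServer:Databases' and
    -- named-instance variants such as 'MSSQL$Inst:Databases'.
    INSERT INTO msdb.dbo.BackupQueue (db_name, queued_at)
    SELECT RTRIM(instance_name), GETDATE()
    FROM master.dbo.sysperfinfo
    WHERE object_name LIKE '%Databases%'
      AND counter_name = 'Percent Log Used'
      AND instance_name NOT IN ('_Total', 'master', 'model', 'tempdb')
      AND cntr_value > 75

    -- Drain step (the every-minute job): back up each queued log in turn.
    DECLARE @db sysname
    SELECT TOP 1 @db = db_name FROM msdb.dbo.BackupQueue ORDER BY queued_at
    WHILE @db IS NOT NULL
    BEGIN
        EXEC ('BACKUP LOG [' + @db + '] TO DISK = ''D:\Backups\' + @db + '_log.trn''')
        DELETE FROM msdb.dbo.BackupQueue WHERE db_name = @db
        SET @db = NULL
        SELECT TOP 1 @db = db_name FROM msdb.dbo.BackupQueue ORDER BY queued_at
    END
    ```

    The single-instance check the poster describes (exit if the previous run hasn't finished) would wrap around the drain step, e.g. via a lock row or by inspecting the job's own execution status.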

    You may discover yet another way of doing it.

  5. #5
    Join Date
    Aug 2002
    Location
    Scotland
    Posts
    1,578
    Why not log shipping, which can take care of transaction log backups without a hitch if the involved servers and network are maintained properly? I feel LS is an ideal solution to take care of backups.

    First, you need to make sure to maintain correct sizes for each database's Tlog, and then schedule the backup job to truncate the log, which will keep the log from filling up.

    You need to work out the exact size to set for the Tlog by monitoring during DBCC checks, DB maintenance plan jobs, etc.

    In general you can set alerts for error 9002 (log full), but the nature of the DBs seems to be volatile; by the time the alert fires and you take action, the condition may have already escalated and the DB will be out of commission with errors.
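
    For reference, a minimal sketch of wiring error 9002 to a log-backup job with sp_add_alert (the alert name, database name, and job name are placeholders; the job must already exist):

    ```sql
    -- Alert on error 9002 (transaction log full) for one database,
    -- running an existing log-backup job in response.
    -- 'YourDb' and 'Backup Log - YourDb' are placeholder names.
    EXEC msdb.dbo.sp_add_alert
        @name = N'Log full - YourDb',
        @message_id = 9002,
        @database_name = N'YourDb',
        @job_name = N'Backup Log - YourDb'
    ```

    As noted above, by the time 9002 is raised the database is already refusing writes, so this is a safety net rather than a substitute for a threshold-based trigger.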
    --Satya SKJ
    Microsoft SQL Server MVP

  6. #6
    Join Date
    Sep 2003
    Posts
    522
    Oh yeah, that's an interesting solution: set up log shipping as a threshold-triggered log dump strategy? How does that relate?

  7. #7
    Join Date
    Dec 2002
    Posts
    1,245

    Re: Event driven log backup

    You can use SQL Alerts to take specific actions (such as executing your log backup job) based on performance thresholds.

    However, I tried this once and was not thrilled with the results. The alert constantly monitors performance values, so the TLOG backup was triggered repeatedly for several seconds (minutes?) until SQL cleared the space from the log file. It can take some time for a log backup to execute and then truncate and free up space in the log file. There is a setting on the performance alert setup page to suppress repeated firing of the alert for a period of time (to allow the condition to clear).

    Still, I'd be careful on this one. I think I might take a more hands-on approach and try to carve out the DBs into two or more groups based on underlying activity. Then set different job schedules for the log backups.

    If the fifty databases are all structurally the same (did I read that correctly?), couldn't you also unify them into a single DB partitioned by schema?

    Just a thought....

    Regards,

    hmscott

  8. #8
    Join Date
    Apr 2003
    Location
    Phoenix, AZ
    Posts
    177

    Re: Event driven log backup

    Thanks, I'll look for the option to disable the repeated running.

    As to the identical copies on a server: when I first arrived here, I asked the same question. Well, there are all sorts of political, legal, and procedural reasons why that can't happen - or at least it can't happen until the application group invests some major time to rewrite the application - which isn't going to happen.

    We're developing new applications that will be MSSQL, but our existing databases are Informix and run on two 4-way AIX servers - with 150 DBs spread across the two servers based upon phone company LATAs.

    When we identified the potential workload that we envision for the MSSQL environment, Microsoft got a glassy look and the HP salesman started shopping for his new Hummer. Based upon benchmarks we calculated a minimum of four 16-way HP/Compaq servers - and that's if we didn't do hot backup replication.

    Back to demand-driven log backup. Informix shares a log pool for all DBs in an instance. That said, we peel off to Tivoli a 12 MB log file every 5 minutes (or less) during peak production. I've noticed the amount of (wall clock) time and resources expended when SQL 2000 backs up a log, and it has me a little concerned.
    Fred Prose

  9. #9
    Join Date
    Feb 2004
    Location
    San Antonio, TX
    Posts
    565
    You need to create a performance condition alert.

    Use the Databases object and the Percent Log Used counter.
    Set it to equal 75%,
    or
    set it to greater than 75% - but then you will need to set the alert's delay between responses to 15 minutes or longer, because the counter will stay at 75% or above until the backup frees space, and the alert would otherwise fire repeatedly.
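
    A sketch of that alert in T-SQL via sp_add_alert (the alert name, database name, and job name are placeholders; note that on a named instance the performance object is MSSQL$InstanceName:Databases rather than SQLServer:Databases):

    ```sql
    -- Performance condition alert: when Percent Log Used for YourDb
    -- exceeds 75, run a pre-existing log-backup job, then stay quiet
    -- for 15 minutes so the alert doesn't re-fire while the counter
    -- is still above the threshold.
    EXEC msdb.dbo.sp_add_alert
        @name = N'Log 75 percent full - YourDb',
        @performance_condition = N'SQLServer:Databases|Percent Log Used|YourDb|>|75',
        @delay_between_responses = 900,   -- seconds (15 minutes)
        @job_name = N'Backup Log - YourDb'
    ```

    One alert per database lets each of the 50 databases trigger its own log backup at its own pace, which is exactly the event-driven behavior asked about in the original post.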
