  1. #1
    Join Date
    Jun 2007
    Posts
    2

    Unanswered: Database Backups & Compression

    Hi All

    I have a database which is 72 GB, which is backed up every night as part of the maintenance plan. I have plenty of storage space, and the server that runs the database is fairly powerful (quad-processor 3.2 GHz, 64-bit, 48 GB RAM) and is part of an active-passive cluster. The database backup is also copied to a SAN location.

    My issue is with the size of the backup file. As part of the Disaster Recovery plan, I need to copy this database backup file across the network to a remote site, so that in the event of a disaster at the site, business can continue at the remote site after restoring the database backup file. However, my database backup file is so big that I cannot copy it across the network in time for the next morning. I have tried using WinRAR and have managed to achieve a file about 20% of its original size, but it takes 2 hours to produce this file.

    Is there any recommended reading for this type of issue? Log shipping / mirroring has been investigated and will be part of the DR model, but the 'powers that be' insist on having a full copy performed to the remote site.

    Any suggestions? Thanks in advance guys n gals :-)

  2. #2
    Join Date
    Feb 2004
    Location
    In front of the computer
    Posts
    15,579
    Provided Answers: 54
    Log shipping IS a full copy, in the truest sense of the word. It is not a monolithic file, but that is a feature instead of a problem in my opinion.

    -PatP

  3. #3
    Join Date
    Jun 2007
    Posts
    2
    Okay, for a moment let:
    machine A = live, machine B = warm standby server on a remote site

    I believe that when restoring log shipments to machine B, there are inherent problems, due to the nightly backups that take place on machine A.

    When log shipping from machine A continues after it has performed a nightly backup, the transaction log entries that were processed and removed during the backup will not be included in the next log shipment to machine B, thus losing a portion of transactions during that given period. Therefore, to restore successfully on machine B, the monolithic database backup file from machine A would be required first to restore the database, followed by any log shipments that have been shipped after that night's backup.

    Is that incorrect...anybody???

    The problem is getting that monolithic file copied across the network in the given timeframe - it's too big to achieve, but 'they' are insisting that it is done.

  4. #4
    Join Date
    Jan 2004
    Location
    In a large office with bad lighting
    Posts
    1,040
    Quote Originally Posted by bobfffg
    Is that incorrect...anybody???
    Bzzzzzzzz ... thank you for playing ... you will get your consolation prize as you head backstage

    When a database is in the full recovery model, a full backup does not mark the transaction log for re-use. That only occurs after a transaction log backup.

    Prove it to yourself and the PTB (powers that be) by setting up log shipping with a copy of Northwind on the target box ... that way you aren't fighting the database size issue. Go through the full backup and transaction log backup cycles, then make database modifications and see if any are lost (hint: they won't be).

    I have, in the past, restored a backup from three months prior and then brought it current by applying log backups from that point forward, even across weekly full backups (storage team issues ... grrrrrrrrrrr!).

    And to make the uneducated happy, you could even slowly copy the full backup across the wire to the failover machine weekly.

    -- This is all just a Figment of my Imagination --

  5. #5
    Join Date
    May 2005
    Location
    South Africa
    Posts
    1,365
    Provided Answers: 1
    The database backup will not truncate the log, so the subsequent log backups will contain all the log entries.

    I have not done this with MS SQL backups yet, but to speed up the transfer of your 72 GB database backup:
    Consider using rsync.
    Also consider compressing the file with an rsyncable gzip.
    At minimum compression it should reduce the backup to around 14 GB at acceptable speed (assuming you don't have images inside your database), and it will allow rsync to copy only the portions of the gzip file that changed. You would probably see that only about 1.4 GB of the 14 GB is actually transferred across the line (10 times faster).
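    The gzip-plus-rsync approach above can be sketched as follows. This is a minimal demo with a small dummy file standing in for the 72 GB backup; the paths are hypothetical, and a local directory stands in for the remote DR host. Note that `--rsyncable` is a gzip extension that not every build includes, so it appears only in a comment.

    ```shell
    set -e

    # Dummy 1 MB file standing in for the 72 GB backup (hypothetical paths).
    mkdir -p /tmp/src /tmp/dr
    head -c 1048576 /dev/urandom > /tmp/src/mydb.bak

    # -1 = minimum compression (fast). Where the gzip build supports it, add
    # --rsyncable so that unchanged regions of the backup compress to identical
    # bytes, letting rsync's delta transfer skip them on subsequent nights.
    gzip -1 -c /tmp/src/mydb.bak > /tmp/src/mydb.bak.gz

    # A local directory stands in for the remote DR site; in practice the
    # destination would be something like drhost:/backups/ (hypothetical name).
    # --partial keeps an interrupted transfer resumable.
    rsync -a --partial /tmp/src/mydb.bak.gz /tmp/dr/

    # Verify the copy decompresses byte-for-byte to the original backup.
    gunzip -c /tmp/dr/mydb.bak.gz | cmp - /tmp/src/mydb.bak && echo "OK"
    ```

    With rsyncable compression, only the compressed blocks that actually changed cross the wire on each nightly run, which is where the roughly 10x saving would come from.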

    PS. I can understand why they want full backups. You only need a problem with one log backup (missing or damaged file) and you won't be able to recover past that point. A full nightly backup and 15 min log backups make sense to me.
    Last edited by pdreyer; 06-20-07 at 10:11. Reason: PS
