I've been tasked with setting up a duplicate copy of our 300 GB production database.
My plan is to take a full backup of Server1 and restore it to Server2 with standby access. Every day I will restore transaction logs from Server1 onto Server2 and take a full backup of Server2, plus a full backup of Server1 once a week.
What are the risks with this setup? What problems could I encounter if SERVER1 crashes and I have to restore the full backup from SERVER2?
I'm responding based on your PM. I haven't administered Sybase servers for years, so my knowledge is both rather rusty and somewhat antiquated, but I'll do what I can until one of the Sybase folks joins the fray!
Is your goal to create a "warm standby" on the second server, or to have a working copy of the database that you can run queries and do other work against? The two ideas were mutually exclusive, at least the last I knew.
The concept of restoring a backup onto a warm standby machine and then applying the log backups to it is known as "log shipping", and it is a good way to have a warm standby server available. The only downside is that any write to the standby database (an INSERT, UPDATE, or DELETE) will prevent restoring further log files, effectively freezing the standby database in time at the point where it was written to.
If you are simply looking for a way to restore SERVER1, keep the last full backup and every log dump taken since that full backup. When you need to restore the database, load the full backup followed by each log dump in sequence, and you ought to be "good to go" with the database restored to the point of the last log dump.
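In Sybase ASE terms, that restore sequence looks roughly like the following sketch (the database name, paths, and number of log dumps here are all hypothetical, and exact syntax can vary by ASE version, so check it against your documentation):

```sql
-- Load the last full database dump.
load database proddb from "/dumps/proddb.full.dmp"
go

-- Apply each transaction log dump, in the order it was taken.
load transaction proddb from "/dumps/proddb.log1.dmp"
go
load transaction proddb from "/dumps/proddb.log2.dmp"
go

-- For a warm standby: allow read-only access while keeping the
-- database able to accept further log loads.
online database proddb for standby_access
go

-- Or, once you have loaded the last log dump, bring it fully online:
-- online database proddb
```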
The method you referred to is a cheap way of getting a warm standby environment, as opposed to, say, Sybase Replication Server.
The methods I can think of for a DR strategy are:
a) dump/load, a.k.a. log shipping
b) Replication Server warm standby
c) Replication Server table-level replication
d) OS-level mirroring
The biggest downside with a) and d) is that if you have database corruption, you will carry it forward with dump and load.
With b) and c), you have a more up-to-date environment, and you can switch the primary over within a few minutes, if not seconds, and switch back once it's fixed. You can also use the standby as a DSS environment, thus making more use of the million-dollar DR server. The downside is that the DBA needs Replication Server knowledge, and the application has to be a little bit flexible to accommodate Replication Server.
Unfortunately, other than striping, there's not much you can do to improve dump/load timings. I thought you were going to use it only for DR purposes -- in that case, why does it matter how long the load takes?
Other options are log shipping, as you mentioned earlier, or
an OS-level sync/backup, perhaps combined with quiesce database (I don't have much experience with it). Talk to your sysadmins/vendors.
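If you do explore the quiesce route, the general shape (as I understand it -- the tag and database names here are hypothetical, so verify against your ASE documentation) is to suspend writes around the OS-level copy:

```sql
-- Suspend writes to the database while the storage-level copy is taken.
quiesce database qtag hold proddb for external dump
go

-- (Take the OS/storage snapshot or split the mirror here.)

-- Resume normal write activity.
quiesce database qtag release
go
```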
"managed to save an hour for backup, restore timings not improved much"
How long does the process take?
If you dump with compress:: (minimum compression) and have an NFS mount on server2, only compressed data passes over the network, and you can start the load on server2 before the dump is complete.
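As a sketch (the paths and database name are hypothetical), the dump side would look something like this, with compress::1 requesting the minimum compression level:

```sql
-- On SERVER1: dump with minimal compression to the NFS-shared path.
dump database proddb to "compress::1::/nfs/dumps/proddb.dmp"
go
```

On SERVER2 you would then load from the same file over its local NFS mount (load database proddb from "compress::1::/nfs/dumps/proddb.dmp"). Whether the load can safely start before the dump finishes is per the post above -- test it on your own versions before relying on it.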
You might also want to look at a Backup Server plugin created by teterin -- not free, though.
Any ideas? I have tried using stripes, but I only managed to save an hour on the backup; restore timings have not improved much.
Among its many uses, my plugin for Sybase Backup Server will let you transfer a database directly from one server to another -- without creating an intermediate dump file. The target and source databases will be identical at the time the transfer finishes, even if the source is modified while the transfer is in progress.
You mentioned budget constraints, but my plugin costs only $160 per server -- probably less money than the time you've spent posting here.
Is it any different from a named pipe technically?
Yes, you can do it with a named pipe, but it would be rather awkward. You'd need named pipes on both computers, and you'd have to start your own script outside of Sybase before issuing the dump/load. The script would have to use either rsh (or ssh) or something like netcat for the actual streaming of data between the computers, which means your 300 GB will be copied between different memory regions (between multiple processes) many times over. My plugin can stream directly to a host:port tuple...
With named pipes you'll also need to compress your dump (even if only with minimal compression) and then decompress it, or else Sybase will insist on a seekable destination (which named pipes aren't). (De)compression costs CPU time, which will increase the total transfer time unless the servers are "far" apart and can compress at a rate higher than that of the network transfer (well over 10 Mb/s on today's LANs). Using CPU for the unneeded compression will also slow down other work (if any) on both servers.
(Also, I think the named pipe does not work with some earlier versions of Sybase -- the Backup Server rejects the destination as being the wrong type. But I'm not 100% sure about this one -- the other reasons are enough for me.)
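The plumbing being described can be sketched locally with standard tools. This is only an illustration of streaming compressed data through a FIFO (the non-seekable destination the post mentions), not anything Sybase-specific -- all file names are made up, and in the real scenario the reader side would be netcat/ssh shipping the stream to the other host:

```shell
#!/bin/sh
# Sketch: round-trip data through a named pipe with compression,
# standing in for dump -> FIFO -> network -> FIFO -> load.
set -e

WORK=$(mktemp -d)
PIPE="$WORK/dump.fifo"
mkfifo "$PIPE"

# Stand-in for the database dump (hypothetical payload).
printf 'sample dump payload\n' > "$WORK/source.dat"

# Start the reader first (in the real setup: netcat/ssh feeding the
# remote load), then compress into the pipe. The stream must be
# compressed because a FIFO is not seekable.
gzip -dc < "$PIPE" > "$WORK/restored.dat" &
gzip -1c < "$WORK/source.dat" > "$PIPE"
wait

cmp -s "$WORK/source.dat" "$WORK/restored.dat" && RESULT=ok || RESULT=fail
echo "round-trip $RESULT"
rm -rf "$WORK"
```

Note the ordering: the reader must have the FIFO open before the writer's redirection unblocks, which is why the decompressing side is backgrounded first.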
Thanks. The reason I asked is that it's probably the only external way I can think of to do it, since you mentioned that none of the vendors certify it -- but my knowledge is limited... I'm not trying to be smart here. It's a very good plugin you wrote, and it has a lot of prospects.
Hey, none of them would "certify" a named pipe either.
If you are really concerned about reliability, the plugin is not a hack -- it uses the same plugin API as Sybase's own libcompress.so plugin (the one you are invoking when using 'compress::...' as the dump destination).
I'm not using Sybase myself, but I'm working on it... somebody else paid for it... Relax, dude. I appreciate what you have done, but I think I have the right to ask technical questions about your product, since you are charging for it and advertising it here.