  1. #1
    Join Date
    May 2016
    Posts
    3

    Unanswered: Large database backup

    We have a somewhat large database server (about 8TB across 5 databases) with very high activity (125 log switches per hour per database). We are looking to reduce our recovery time, and the large number of archive logs that would need to be applied takes too long to replay. What is the best solution for quickly recovering large Oracle databases with this much activity (CDP solutions? Disk-array snapshots? Oracle Flashback? Oracle Data Guard)? Are there other types of solutions?
    Last edited by chainsaw; 05-27-16 at 12:43. Reason: mis-typed log switch rate

  2. #2
    Join Date
    Jun 2004
    Posts
    793
    Provided Answers: 1
    You'll need to define your requirements a bit better:

    For instance, when you say "quickly recover" - are you looking for minutes? or hours? or instant recovery?

    Are you talking about recovering one item of data? or a table? a corrupt datafile? the whole database?

    There are various methods/solutions available, as you are aware. Each is good in its own way, so without knowing in more detail what it is you're looking for it's a bit hard to recommend one solution over another.
    90% of users' problems can be resolved by punching them - the other 10% by switching off their PCs.

  3. #3
    Join Date
    May 2016
    Posts
    3
    Quote Originally Posted by cis_groupie View Post
    You'll need to define your requirements a bit better:

    For instance, when you say "quickly recover" - are you looking for minutes? or hours? or instant recovery?

    Are you talking about recovering one item of data? or a table? a corrupt datafile? the whole database?

    There are various methods/solutions available, as you are aware. Each is good in its own way, so without knowing in more detail what it is you're looking for it's a bit hard to recommend one solution over another.
    We are looking to improve the restore/recovery time that we see today for the whole database, while still being able to do GLR. I'm really looking for high-level approaches to backing up an environment like this. In designing the solution we would plan for the worst-case scenario of a full restore of all data. We are trying to improve the RTO beyond what we get today with an RMAN hot backup, which takes about 6 hours to lay down the database and 25 hours to restore and recover two days' worth of archive logs. FWIW the log file size is 4GB and the log switches are actually 125/hr (2,000-3,000 log switches per day).
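
    To make the discussion concrete, here is roughly what an RMAN incrementally updated image-copy cycle looks like - with that approach the worst-case restore becomes a switch to the on-disk copy rather than laying down 8TB again. This is only an illustrative sketch (the tag name is made up, it's not what we run today, and it doesn't by itself shrink the archive log apply time):

        # Illustrative RMAN sketch only; tag name is invented.
        # Daily cycle: roll the on-disk image copy forward, then take a fresh level 1.
        RUN {
          RECOVER COPY OF DATABASE WITH TAG 'incr_fra_copy';
          BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'incr_fra_copy' DATABASE;
        }

        # Worst case (database mounted): point the controlfile at the copies instead
        # of restoring 8TB, then recover forward through the archive logs.
        SWITCH DATABASE TO COPY;
        RECOVER DATABASE;
        ALTER DATABASE OPEN;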

  4. #4
    Join Date
    Jun 2013
    Posts
    10
    I would say you should look into some kind of replication service. For the database side, that would be either Data Guard or GoldenGate. The latter is more expensive, but supports active-active replication and might make it easier to apply patches.

    http://www.oracle.com/technetwork/da...te-096557.html

    If you're using VMware, there are products like Zerto that can also protect databases, but they're usually aimed more at disaster recovery, like Data Guard is. It all depends on your requirements.
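
    For reference, Oracle's docs show the primary-side setup for a physical standby along these lines - just an untested sketch, with invented database names, and the standby itself would normally be built with RMAN DUPLICATE ... FOR STANDBY:

        -- Untested sketch of primary-side Data Guard parameters; prod / prod_stby are made up.
        ALTER SYSTEM SET log_archive_config = 'DG_CONFIG=(prod,prod_stby)' SCOPE=BOTH;
        ALTER SYSTEM SET log_archive_dest_2 =
          'SERVICE=prod_stby ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=prod_stby'
          SCOPE=BOTH;
        ALTER SYSTEM SET fal_server = 'prod_stby' SCOPE=BOTH;
        ALTER SYSTEM SET standby_file_management = 'AUTO' SCOPE=BOTH;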

  5. #5
    Join Date
    Jun 2004
    Posts
    793
    Provided Answers: 1
    Sorry, I can't help further. Your requirements are beyond my technical expertise. Rather than spin you some waffle I'll bow out & see if someone else can help you.
    90% of users' problems can be resolved by punching them - the other 10% by switching off their PCs.

  6. #6
    Join Date
    Jun 2013
    Posts
    10
    You should look into some kind of replication service. I don't work at an Oracle shop, but GoldenGate seems like the standard for active-active replication. Of course, it's also the most expensive option. If you just need a standby database, Data Guard might be a better fit.

    On the virtualization level, there are also companies like Zerto and Veeam that claim to be able to handle databases. I don't know how well they work or if Oracle would refuse to support you if you ran into database problems.

    The other answer might be faster network and/or storage. Make sure you're not running into a hardware bottleneck during the backup or restore.
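
    If you want to check that, something along these lines against V$RMAN_BACKUP_JOB_DETAILS should show whether the backup itself is I/O bound - again, only a sketch, I haven't run it here:

        -- Sketch: per-job RMAN throughput, newest first.
        SELECT start_time,
               end_time,
               input_type,
               status,
               input_bytes_per_sec_display,
               output_bytes_per_sec_display
          FROM v$rman_backup_job_details
         ORDER BY start_time DESC;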

  7. #7
    Join Date
    Jun 2004
    Location
    Liverpool, NY USA
    Posts
    2,503
    Are you trying to protect against hardware failure (RAID and RAC will help) or database corruption?
    Bill
    You do not need a parachute to skydive. You only need a parachute to skydive twice.

  8. #8
    Join Date
    May 2016
    Posts
    3
    Quote Originally Posted by beilstwh View Post
    Are you trying to protect against hardware failure (RAID and RAC will help) or database corruption?
    database corruption

  9. #9
    Join Date
    Jun 2004
    Location
    Liverpool, NY USA
    Posts
    2,503
    The problem with database corruption is that the corrupt changes are propagated just like any other changes, so hardware solutions will not help. To recover from accidental changes to the database, Flashback is a good feature. However, with a very active database you will need a lot of space for the flashback logs, or your window to fix something will be limited.
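
    For what it's worth, enabling it looks roughly like this - the destination and sizes below are invented examples, and with 2,000-3,000 log switches a day the flashback logs alone would need a very large fast recovery area:

        -- Illustrative only; FRA location and sizes are made up and need real sizing.
        ALTER SYSTEM SET db_recovery_file_dest_size = 4096G SCOPE=BOTH;
        ALTER SYSTEM SET db_recovery_file_dest = '+FRA' SCOPE=BOTH;
        ALTER SYSTEM SET db_flashback_retention_target = 1440 SCOPE=BOTH;  -- minutes
        ALTER DATABASE FLASHBACK ON;

        -- After a corruption is found: mount, rewind, open with resetlogs.
        SHUTDOWN IMMEDIATE
        STARTUP MOUNT
        FLASHBACK DATABASE TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '2' HOUR);
        ALTER DATABASE OPEN RESETLOGS;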
    Bill
    You do not need a parachute to skydive. You only need a parachute to skydive twice.

  10. #10
    Join Date
    Sep 2016
    Location
    Pune
    Posts
    5

    Large database backup

    Hi,
    I've handled this using a read-only replication slave of my database server.

    MySQL database replication is pretty easy to set up and monitor. You can set it up to receive all changes made to your production database, then take the slave offline nightly to make a backup.

    The Replication Slave server can be brought up as read-only to ensure that no changes can be made to it directly.

    There are other ways of doing this that don't require the replication slave, but in my experience that was a pretty solid way of solving this problem.
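
    In rough outline, the nightly cycle on the slave looks something like this (a simplified sketch; the dump command and its options are just examples):

        -- On the replica: keep it read-only so nothing writes to it directly.
        SET GLOBAL read_only = ON;

        -- Nightly: pause the applier, take a consistent dump, resume replication.
        STOP SLAVE SQL_THREAD;
        -- shell: mysqldump --all-databases --single-transaction > nightly_backup.sql
        START SLAVE SQL_THREAD;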
