  1. #1
    Join Date
    Jun 2005
    Posts
    319

    Unanswered: Merge Replication deployments - how do you do it in a timely manner?

    Our current production deployments using merge replication are averaging 4-5 hours - all of which is downtime for our client sites. Hotfix releases require no downtime; I am talking about maintenance releases where articles are changed/added/removed, requiring a full snapshot to be taken.

    Management is not happy about this deployment window lasting so long.

    I have two questions:

    1) Our first suggestion is a proof of concept: leave the sites up while the snapshot is being generated, and only bring them down briefly while the snapshot is applied and while we smoke test the new application and DB code. Can SQL Server easily decouple generating the snapshot from applying it, so we can time this appropriately?

    2) Our snapshots take ~45 minutes each, and a deployment typically requires 3 of them. We currently run these serially.
    A) Is there a way to take/apply these snapshots in parallel?
    B) We have a SAN in production and the capability to do SAN replication. I am not too familiar with SAN replication, but can it somehow be used to make these snapshots run in minutes rather than closer to an hour?
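For what it's worth, generating a merge snapshot and applying it are handled by separate agents in SQL Server 2005 (the Snapshot Agent writes the files; a Merge Agent applies them at its next sync), so the two steps can already be timed independently. A sketch, assuming a hypothetical publication named `MyMergePub`:

```sql
-- Run at the publisher, in the published database.
-- Kick off the Snapshot Agent for one publication on demand;
-- the sites can stay up while the snapshot files are written.
EXEC sys.sp_startpublication_snapshot
    @publication = N'MyMergePub';   -- hypothetical publication name

-- Snapshot Agents for *different* publications run as separate
-- SQL Agent jobs, so the equivalent calls for the other two
-- publications could be started concurrently rather than serially.

-- The snapshot is only applied when a Merge Agent next synchronizes,
-- so leaving the Merge Agent jobs stopped until the maintenance
-- window lets you control exactly when subscribers are reinitialized.
```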

    Any links/references that are informative for dealing with merge replication deployment best practices would be much appreciated as well.

    Thanks!
    -John

  2. #2
    Join Date
    Jul 2003
    Location
    San Antonio, TX
    Posts
    3,662
    You have to provide the business justification for merge replication first, and then the versions of the SQL boxes involved in the current configuration.
    "The data in a record depends on the Key to the record, the Whole Key, and
    nothing but the Key, so help me Codd."

  3. #3
    Join Date
    Jan 2003
    Location
    Nottinghamshire, UK
    Posts
    364
    Gagnon

    1) SQL 2k or 2005 ?
    2) Why are you having to take the publisher offline while it's generating snapshots?
    3) If the individual publications do not contain the same articles/tables, why do you think you shouldn't run them in parallel?
    4) When applying these snapshots to the subscribers, why do you think they should not run in parallel?
    5) Is this two-way or one-way merge replication (i.e. do your subscribers update data)?

    I must admit that in the past I've been fairly blasé about re-initialising subscribers, just setting a number of publications off at one time.

    Didn't seem to cause me any trouble.

    I configured them to run compressed via FTP to subscribers, I think, and let SQL take care of the mechanics of that part of the operation. There are many considerations in how to design your replication system, though.
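The compressed-snapshot setting mentioned above is a publication property. A sketch using sp_changemergepublication, assuming a hypothetical publication `MyMergePub`; note that compressed snapshots can only be written to an alternate snapshot folder:

```sql
-- Run at the publisher, in the published database.
-- Compressed snapshots require an alternate snapshot folder.
EXEC sp_changemergepublication
    @publication = N'MyMergePub',                    -- hypothetical
    @property    = N'alt_snapshot_folder',
    @value       = N'\\fileserver\repl_snapshots',   -- hypothetical share
    @force_invalidate_snapshot = 1;  -- changing this invalidates the snapshot

EXEC sp_changemergepublication
    @publication = N'MyMergePub',
    @property    = N'compress_snapshot',
    @value       = N'true',
    @force_invalidate_snapshot = 1;
```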

    There are many Articles about but you need to be familiar With

    Publishers
    Distributors
    Subscribers
    Publications
    Articles/Tables
    Push
    Pull

    Subscribers that update data and send it back to the publisher or subscribers that only read data

    GW
    "Everything should be made as simple as possible, but not simpler." - Albert Einstein
    "Everything should be made as complex as possible, so I look Cleverer." - Application Developer

  4. #4
    Join Date
    Jan 2003
    Location
    Nottinghamshire, UK
    Posts
    364
    rd you beat me to it - lol
    "Everything should be made as simple as possible, but not simpler." - Albert Einstein
    "Everything should be made as complex as possible, so I look Cleverer." - Application Developer

  5. #5
    Join Date
    Jun 2005
    Posts
    319
    My comments in red below
    Quote Originally Posted by GWilliy
    Gagnon

    1) SQL 2k or 2005 ?
    2005
    2) Why are you having to take the publisher offline while it's generating snapshots?
    Subscriber websites - we probably don't, and we have discussed this; we will do a PoC to make sure it does not affect the sites. A related question: if you take a snapshot, doesn't it automatically start being applied when complete? I don't think we can have our sites running with empty tables while the snapshot is being applied.
    3) If the individual publications do not contain the same articles/tables, why do you think you shouldn't run them in parallel?
    The same tables are replicated across systems:
    Payment table belongs to all 3 subscriptions
    I guess this means we are stuck running them serially? We can queue up the snapshots, but we still need to wait for one to finish before the next kicks off, and then the last one.

    4) When applying these snapshots to the subscribers, why do you think they should not run in parallel?
    See #3 above
    5) Is this two-way or one-way merge replication (i.e. do your subscribers update data)?
    Two way merge replication

    I must admit that in the past I've been fairly blasé about re-initialising subscribers, just setting a number of publications off at one time.
    Does this mean no snapshot was required? I thought that whenever a table was added or an article changed, it required the invalidate_snapshot argument to be set; otherwise you get an error saying you must invalidate the snapshot. Or do you mean something different by re-initialize?

    Didn't seem to cause me any trouble.

    I configured them to run compressed via FTP to subscribers, I think, and let SQL take care of the mechanics of that part of the operation. There are many considerations in how to design your replication system, though.
    These servers are on the same network node connected via a Gigabit ethernet network.
    There are many Articles about but you need to be familiar With

    Publishers
    Distributors
    Subscribers
    Publications
    Articles/Tables
    Push
    Pull

    Subscribers that update data and send it back to the publisher or subscribers that only read data

    GW
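On the invalidate_snapshot question: yes, adding an article to an existing merge publication requires explicitly allowing the snapshot to be invalidated (and usually the subscriptions to be reinitialized), otherwise the procedure raises an error. A sketch with sp_addmergearticle, using hypothetical publication and table names:

```sql
-- Run at the publisher, in the published database.
EXEC sp_addmergearticle
    @publication   = N'MyMergePub',   -- hypothetical publication
    @article       = N'Payment',      -- hypothetical article/table
    @source_object = N'Payment',
    @source_owner  = N'dbo',
    @force_invalidate_snapshot = 1,   -- without this, the call errors out
    @force_reinit_subscription = 1;   -- existing subscribers get a new snapshot
```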

  6. #6
    Join Date
    Jun 2005
    Posts
    319
    Quote Originally Posted by rdjabarov
    You have to provide the business justification for merge replication first. And then versions of your SQL boxes involved in the current configuration.
    Do you run bidirectional transactional replication instead? What are the alternatives when you have 3 applications sharing the same data, but you do not want to bring down all 3 websites when you update a single one (application DLLs and DB changes)? Or when you need to do server maintenance on the DB server (hence separate DB clusters)?

  7. #7
    Join Date
    Jul 2003
    Location
    San Antonio, TX
    Posts
    3,662
    How often are your updates actually happening? If updating, which requires bringing down the site, is the only reason for merge replication, then I'd say you can come up with something less costly in terms of maintenance. I can't think straight without seeing a contract, though, if your next question is "What should we do instead?"
    "The data in a record depends on the Key to the record, the Whole Key, and
    nothing but the Key, so help me Codd."

  8. #8
    Join Date
    Jun 2005
    Posts
    319
    Quote Originally Posted by rdjabarov
    How often are your updates actually happening? If updating, which requires bringing down the site, is the only reason for merge replication, then I'd say you can come up with something less costly in terms of maintenance. I can't think straight without seeing a contract, though, if your next question is "What should we do instead?"
    You are losing me with your posts; you are not making much sense.

    When you ask how often our updates are happening, are you asking how often we deploy releases to production? About once a month.

  9. #9
    Join Date
    Jan 2003
    Location
    Nottinghamshire, UK
    Posts
    364
    I can't think straight without seeing a contract though
    LOL, you mercenary

    Gag, your topology seems a bit tricky. Many schema changes while servicing numerous 24/7 website databases via replication is always going to be a bit tricky; if you continue on this path, you (because only you fully understand your topology) will need to get under the hood of how replication actually works (as you are doing).

    If you're on a gigabit backbone, why not have a shared database with linked servers instead of replication?
    I understand there could be some good technical & political reasons against it, but it would be easier & probably faster.

    Don't lose sight of the wood for the trees. Think maintenance & total cost of ownership.

    GW
    "Everything should be made as simple as possible, but not simpler." - Albert Einstein
    "Everything should be made as complex as possible, so I look Cleverer." - Application Developer

  10. #10
    Join Date
    Jun 2005
    Posts
    319
    Quote Originally Posted by GWilliy
    LOL, you mercenary

    Gag, your topology seems a bit tricky. Many schema changes while servicing numerous 24/7 website databases via replication is always going to be a bit tricky; if you continue on this path, you (because only you fully understand your topology) will need to get under the hood of how replication actually works (as you are doing).

    If you're on a gigabit backbone, why not have a shared database with linked servers instead of replication?
    I understand there could be some good technical & political reasons against it, but it would be easier & probably faster.

    Don't lose sight of the wood for the trees. Think maintenance & total cost of ownership.

    GW
    Hey GW -

    Our previous implementation was a bunch of one-way transactional subscriptions with linked server calls. You are right that latency is much lower and deployments much smoother (only the new article is snapshotted when added, not a full snapshot). The problem was that this created a dependency between application servers: if one server needed to go down (i.e. our internal website's SQL Server), that would affect all subscribers with direct linked server calls to this "master" DB. So I don't think we want to go back to the non-autonomous design.

  11. #11
    Join Date
    Jan 2003
    Location
    Nottinghamshire, UK
    Posts
    364
    Hmmmm - OK

    But it only takes seconds (manually - even faster programmatically) to change the target server name within the linked server settings!

    If you maintained a copy of these master tables on another server, you would have resilience even if your core server went down (unlikely).
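Repointing a linked server at a standby host can indeed be scripted. A sketch, assuming a hypothetical linked server named MasterDB and a hypothetical standby instance:

```sql
-- Repoint an existing linked server at a different physical server
-- without dropping and recreating it (login mappings are preserved).
EXEC sp_setnetname
    @server  = N'MasterDB',        -- hypothetical linked server name
    @netname = N'STANDBY-SQL01';   -- hypothetical standby instance
```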

    Just a thought
    "Everything should be made as simple as possible, but not simpler." - Albert Einstein
    "Everything should be made as complex as possible, so I look Cleverer." - Application Developer
