  1. #1
    Join Date
    Sep 2013
    Posts
    2

    Replication and CPU/memory use

    I've started to look at replication (transactional replication), and I can't help wondering what the CPU and memory toll on the server will be.

    One of the databases is about 16 GB, and we add roughly 20,000 records per day to it.

    We also have a number of smaller "satellite" databases (about 10), varying in size from a few hundred MB to 1-2 GB, each of which gets about 1,000-2,000 new records per day.

    What should we expect the CPU/memory load to be for something like that?

    The SQL Server instance runs on Windows Server 2008 R2 with a 2.4 GHz CPU and 16 GB of RAM.

    I hope one of you gurus has a good estimate (guesstimate?) for me.

    Thanks in advance!

  2. #2
    Join Date
    Jun 2003
    Location
    Ohio
    Posts
    12,592
    Provided Answers: 1
    IO probably takes a bigger hit than CPU or memory.
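
    If you want to see where that hit lands, a before/after look at file-level I/O is cheap to get from the DMVs. A rough sketch (the database name is a placeholder); capture it before you enable replication and again afterwards, then compare the deltas:

    Code:
    -- Cumulative reads/writes per database file since the last SQL Server restart.
    -- Run before enabling replication and again later, then compare the differences.
    -- 'YourBigDb' is a placeholder name.
    SELECT DB_NAME(vfs.database_id)  AS database_name,
           mf.physical_name,
           vfs.num_of_reads,
           vfs.num_of_writes,
           vfs.num_of_bytes_read,
           vfs.num_of_bytes_written,
           vfs.io_stall_read_ms,
           vfs.io_stall_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
      ON mf.database_id = vfs.database_id
     AND mf.file_id     = vfs.file_id
    WHERE DB_NAME(vfs.database_id) IN (N'YourBigDb', N'distribution');
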
    If it's not practically useful, then it's practically useless.

    blindman
    www.chess.com: "sqlblindman"
    www.LobsterShot.blogspot.com

  3. #3
    Join Date
    Jan 2003
    Location
    Massachusetts
    Posts
    5,800
    Provided Answers: 11
    Are you going with a continuous replication schedule, or a discrete one? Are the 20,000 inserts done evenly through the day, or concentrated during your business hours? Is the distributor located on the same server as the publisher? Where is the subscriber?

    Either way, with today's hardware I would expect you will not notice a difference at this scale. Certainly you will see a burst of disk activity when you create the initial snapshot, but after that the log reader is pretty low impact.
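
    Once it is up and running you can sanity-check that yourself: sp_replcounters on the publisher reports per-database throughput and latency for replicated transactions, so you can see how hard the log reader is actually working. Just a quick sketch:

    Code:
    -- Run on the publisher: one row per published database, showing replicated
    -- transaction counts, replication rate (trans/sec), and current latency.
    EXEC sp_replcounters;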

  4. #4
    Join Date
    Sep 2013
    Posts
    2
    Either let the server run the replication every night, or else run it continuously, I think; whatever has the least impact on the user experience of the programs/websites that use the databases we replicate.

    But I'm not sure which would be the best solution; I hope you guys can tell me that. We're all quite new to the whole replication thing, and as usual with IT projects it has to be done ASAP, so I decided to seek the wisdom of the people on this site in the hope of getting an easy fix... and of learning something new.

  5. #5
    Join Date
    Jan 2003
    Location
    Massachusetts
    Posts
    5,800
    Provided Answers: 11
    I am not sure I can advise you on the "best" solution from just this, either.

    If the copy is being used only for reporting, then maybe you actually want to look into log shipping.

    If you are only replicating a small subset of the data, then maybe transactional replication is better.

    If you are setting up a disaster recovery site, then you should probably look into either log shipping or database mirroring.

    There are a number of tools for moving data from one place to another, and each one has its pros and cons. Replication is not good at moving stored procedure or index changes from one server to another. Mirroring will leave you with an offline database on the other server, although you can take snapshots of it. With log shipping, you need to balance the frequency of the transaction log restores applied to the destination server against how up to date the destination needs to be.
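
    To make the "small subset" case concrete, a minimal transactional publication is just the database option, the publication, and one article per table you want to move. This is only a sketch with made-up names (MyDb, dbo.Orders); it assumes the distributor is already configured, and it leaves out the snapshot agent and the subscription entirely:

    Code:
    -- Sketch only: enable the database for publishing, create a transactional
    -- publication, and publish a single table. Distributor setup, the snapshot
    -- agent, and subscriptions are omitted. All names are placeholders.
    USE MyDb;

    EXEC sp_replicationdboption
         @dbname  = N'MyDb',
         @optname = N'publish',
         @value   = N'true';

    EXEC sp_addpublication
         @publication = N'MyDb_Orders_Pub',
         @repl_freq   = N'continuous',   -- transactional (not snapshot-only)
         @status      = N'active';

    EXEC sp_addarticle
         @publication   = N'MyDb_Orders_Pub',
         @article       = N'Orders',
         @source_owner  = N'dbo',
         @source_object = N'Orders';     -- must have a declared primary key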

  6. #6
    Join Date
    Feb 2004
    Location
    In front of the computer
    Posts
    15,579
    Provided Answers: 54
    The single biggest hurdle our clients face when implementing Microsoft's Transaction Replication is the requirement for declared Primary Keys. That is often quite hard to implement.

    Without knowing more about your workload I can't give you even a good guess, but if you are running hardware and software that is less than five years old, the overhead of managing replication should be almost entirely disk... For the cases that I've implemented, the CPU difference is negligible and the memory overhead is small. I/O was a problem in one case with millions of rows updated per day, but disk space is always a surprise, since transactional replication usually requires somewhere between two and five times the space of the database being replicated.
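
    If you want to know up front how big that hurdle is, listing every user table without a declared primary key is a one-liner per database (a sketch):

    Code:
    -- User tables with no declared primary key; each of these would need a PK
    -- added (or be excluded) before it can be a transactional article.
    SELECT SCHEMA_NAME(t.schema_id) AS schema_name,
           t.name                   AS table_name
    FROM sys.tables AS t
    WHERE OBJECTPROPERTY(t.object_id, 'TableHasPrimaryKey') = 0
    ORDER BY schema_name, table_name;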

    -PatP
    In theory, theory and practice are identical. In practice, theory and practice are unrelated.

  7. #7
    Join Date
    Aug 2004
    Location
    Dallas, Texas
    Posts
    831
    Quote Originally Posted by Pat Phelan
    The single biggest hurdle our clients face when implementing Microsoft's Transaction Replication is the requirement for declared Primary Keys. That is often quite hard to implement.
    -PatP
    Ditto! But the option to add a PK is always on the table, or else you don't replicate the table. I had a recent experience here where there were three databases to move and then replicate. About 35% of the tables did not have any PKs, which is a design flaw to begin with, and those tables were needed for reporting. Oh well.
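
    For what it's worth, when one of those tables has no natural key, the usual fallback is to bolt on a surrogate. A sketch with hypothetical names (dbo.ReportStage, RowId); whether a surrogate key is acceptable obviously depends on the table:

    Code:
    -- Sketch: add a surrogate identity column and declare it as the primary key
    -- so the table can be published. Table and column names are hypothetical.
    ALTER TABLE dbo.ReportStage
        ADD RowId int IDENTITY(1, 1) NOT NULL;

    ALTER TABLE dbo.ReportStage
        ADD CONSTRAINT PK_ReportStage
        PRIMARY KEY NONCLUSTERED (RowId);   -- nonclustered, in case the table
                                            -- already has a clustered index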

  8. #8
    Join Date
    Feb 2004
    Location
    In front of the computer
    Posts
    15,579
    Provided Answers: 54
    True, and if the client can't implement a DRI PK (Declarative Referential Integrity primary key), then the only practical alternative is log shipping... Log shipping ain't pretty, but in that case it is often "good enough" for what they want.
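
    Stripped of the wizard and the agent jobs, log shipping is nothing fancy: log backups on the primary restored WITH NORECOVERY on the secondary. A bare-bones sketch of one cycle, with placeholder names and paths:

    Code:
    -- On the primary: back up the transaction log.
    BACKUP LOG MyDb
        TO DISK = N'\\backupshare\MyDb\MyDb_log.trn';

    -- On the secondary: restore it without recovering, so later log backups
    -- can still be applied. (Use WITH STANDBY for a read-only reporting copy.)
    RESTORE LOG MyDb
        FROM DISK = N'\\backupshare\MyDb\MyDb_log.trn'
        WITH NORECOVERY;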

    -PatP
    In theory, theory and practice are identical. In practice, theory and practice are unrelated.
