I keep data in four separate locations, each with different data retention.
The problem is, I keep the publisher "thin" (only 24 hrs of data; the relevant tables take in about 250,000 rows a day), and these deletes are blocked at the subscribers by commenting out the body of the replication proc.
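For context, the procs in question are the per-article delete procs that transactional replication generates on the subscriber (conventionally named sp_MSdel_&lt;table&gt;). "Commenting out the body" means turning one into a no-op stub, roughly like this sketch (the table and parameter names are illustrative):

```sql
-- Auto-generated replication delete proc on the subscriber,
-- gutted so each replicated DELETE becomes a no-op.
-- (sp_MSdel_<table> is the conventional name; the original body
-- would delete the single row matching the primary-key parameter.)
ALTER PROCEDURE dbo.sp_MSdel_MyTable
    @pkc1 int    -- primary-key value of the row to delete (illustrative)
AS
BEGIN
    -- original DELETE commented out; just acknowledge the command
    RETURN 0
END
```

Note that the delete command for each row still travels from the distributor to the subscriber even though nothing is deleted there.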
This does, however, cause a burst in bandwidth and locks the subscribers, even though no actual delete takes place.
Does anyone know if you can actually NOT REPLICATE delete commands for specified tables at all - so that the delete won't even be SENT to the subscriber?
Unfortunately not an option - you see, operations are at their peak (USA) when it is the middle of the night here (SA), and then when it is calm there, all the reporting/work is being done here - so there is no downtime to speak of. It isn't a killer, but it would definitely be cool if I didn't have to replicate the delete commands at all...
No joy here, Desmond. It is the entries in the transaction log that are sent to the distributor, which are then forwarded to the subscriber(s) to be implemented via the replication sprocs (congrats on finding those, by the way).
You would either have to hack the transaction log at the publisher to remove them (which would invalidate any backups you have) or find a way at the distributor to remove them.
Bear in mind also that when a column that participates in an index is updated, the update is actually performed as a delete plus an insert, rather than an update in place. I discovered this with Log Explorer a couple of years back.
The answer to avoiding replicating huge batches of deletes is this:
publish the proc that does the deletes, specifying that its execution be replicated;
make the one on the subscriber just return 0;
then exec the one on the publisher - regardless of how many rows it affects on the publisher, only ONE command gets sent to the subscriber: the exec.
huge problem solved...
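A sketch of that setup, assuming SQL Server transactional replication (the proc, table, and publication names are illustrative; @type = 'proc exec' is the sp_addarticle article type that replicates the procedure's execution rather than its row-by-row effects):

```sql
-- 1. On the publisher: the proc that trims old data in bulk.
CREATE PROCEDURE dbo.PurgeOldRows
AS
BEGIN
    -- delete everything older than 24 hours (illustrative table name)
    DELETE FROM dbo.MyBigTable
    WHERE  InsertedAt < DATEADD(hour, -24, GETDATE())
END
GO

-- 2. Publish the proc itself, replicating its EXECUTION:
EXEC sp_addarticle
    @publication   = N'MyPublication',  -- illustrative publication name
    @article       = N'PurgeOldRows',
    @source_object = N'PurgeOldRows',
    @type          = N'proc exec'       -- send the EXEC, not the row deletes
GO

-- 3. On the subscriber: replace the proc body with a no-op,
--    so the replicated EXEC does nothing there.
ALTER PROCEDURE dbo.PurgeOldRows
AS
    RETURN 0
GO
```

Running EXEC dbo.PurgeOldRows on the publisher then deletes the ~250,000 rows locally, but only that single EXEC statement crosses the wire; the subscriber's stub returns immediately, so there is no burst of delete commands and no locking on the reporting side.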