Hmm. I've just finished shuddering at the thought of identifying all INSERT, UPDATE and DELETE statements and then calculating the "hit" in terms of the respective size of data affected for each statement. ;-)
What I'm trying to achieve is a guesstimate of how large a differential backup would be over a given period following a full backup. I need this to predict the extra storage I would need with an external data vault provider based on retention periods, etc.
I was hoping that there might be an easier way of doing this through Profiler or whatever else...
The only other way that has occurred to me is to use Windows Performance Monitor and to monitor the 'Page Writes/sec' counter every 2 seconds, say, over the appropriate period (24 hours, say). That way, I can make a very crude assumption that the amount of data written corresponding to each data point is A x 2 x 8 KB, where A is the number of page writes/sec. I can sum these subtotals and make an estimate of churn that way. Does that seem OK-ish?
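Just to sanity-check that arithmetic, here's a rough sketch of the summation (the function name and the sample counter values are made up for illustration; 8 KB is the SQL Server page size):

```python
PAGE_SIZE_KB = 8        # SQL Server data page size
SAMPLE_INTERVAL_S = 2   # polling interval used in Performance Monitor

def estimate_churn_kb(page_writes_per_sec_samples):
    """Sum A * interval * page size over all samples,
    where A is the sampled 'Page Writes/sec' value."""
    return sum(a * SAMPLE_INTERVAL_S * PAGE_SIZE_KB
               for a in page_writes_per_sec_samples)

# e.g. three samples of 100, 150 and 50 page writes/sec:
samples = [100, 150, 50]
print(estimate_churn_kb(samples))  # (100+150+50) * 2 * 8 = 4800 KB
```

Over 24 hours at a 2-second interval that's 43,200 samples, so the same sum, just bigger.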
As long as you never change (insert/update/delete) the same page twice, that should work nicely. I'll listen for the nice young men on their flying pigs to fly by, singing of your magical powers should that be the case.
The best guesstimate would come from actually doing it. It isn't that tough to set up a differential backup schedule and measure the output for a week, or whatever period you decide is adequate.
Keep in mind that most databases have monthly/quarterly/etc usage patterns, so be careful to either pick the worst case or allow for it in your estimates.