Re: Optimizing unique clustered index rebuild
Originally posted by mikebase
Here is the story:
I'm running a stored proc that purges and archives data from one DB to another. It is very slow because there are a lot of tables involved with huge rowcounts, and I've refused to mess with the proc because it's not my creation (shame on me).
To speed up the run time of the proc, I drop the indexes on the tables to which data is being archived (inserted) and rebuild them after the proc is done. It helps, but as these archive tables grow larger, rebuilding their indexes takes longer and longer.
Please let me know if there is a way to optimize rebuilding unique clustered indexes (tinkering with someone else's code will be my last resort).
We have a maintenance sproc which drops the indexes and rebuilds them to prevent corruption. Since you are creating unique keys over a set of data, the RDBMS needs to check for duplicates before it can create the index. More data results in more reads, and reads take time.
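One thing worth checking: if the archive tables are loaded in clustered-key order (or can be sorted beforehand), Sybase ASE's sorted_data option lets the index build skip the sort phase. The table, database, and column names below are made up for illustration; only the syntax and the option itself come from ASE:

```sql
-- Hypothetical archive table; db/table/column names are illustrative.
-- If the rows were inserted in clustered-key order, sorted_data lets
-- ASE skip the sort phase of the build.  The uniqueness check still
-- runs, but the expensive re-sort of the whole table does not.
create unique clustered index ix_orders_archive
    on archive_db..orders_archive (order_id)
with sorted_data
go
```

Note that if the data turns out not to be in key order, ASE aborts the create index with an error rather than silently sorting, so this is only a win when the proc really does insert in order.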
My approach, if there is enough memory (RAM) on the server, would be to play around with the cache. Increasing the default data cache wouldn't hurt, and beyond that you should figure out which of your tables are biggest and try to bind them to a named cache. That can reduce the read time while building the indexes, because the table is then fully or partially in RAM.
As I said, this is only a proposal, but from a logical point of view it should give you a performance boost. That said, a backup DB is not really performance-oriented, so I would just leave the index builds to run overnight.
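The cache idea above can be sketched with ASE's cache system procedures. The cache name, size, and object names here are assumptions for illustration, not recommendations; size the cache to your own largest archive tables:

```sql
-- Create (or resize) a named data cache; 'archive_cache' and '200M'
-- are illustrative values only.
sp_cacheconfig 'archive_cache', '200M'
go
-- Bind one of the big archive tables to that cache so its pages stay
-- in RAM during the index rebuilds (hypothetical db/table names).
sp_bindcache 'archive_cache', 'archive_db', 'orders_archive'
go
```

Depending on the ASE version, creating a brand-new named cache may require a server restart before the binding takes effect, so check the server's messages after running sp_cacheconfig.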
Last edited by clarus; 09-09-03 at 12:59.
kdb is the solution to all problems.
Just try it, www.kx.com