I have the following query, but it takes 60 minutes for the updates (temptable has 50K rows and cl_rel has 0.35 million rows).
I am trying to tune it. Is there something that can be done?
create unique index idx1 on temptable (sid)

update cl_rel
set rel.sid = cl.sid
from temptable cl (index idx1), cl_rel rel (index idx_clrpt1)
where cl.seq_id > @counter
and cl.seq_id <= (@counter + 1000)
and (rel.sid != cl.sid or rel.sid is null)

select @counter = @counter + 1000
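For context, the batch loop that drives @counter is not shown in the post; a minimal sketch of what it presumably looks like follows (the initialization to 0 and the max(seq_id) upper bound are my assumptions, not from the original):

```sql
-- Hypothetical driver loop for the posted batched update (Sybase ASE style).
declare @counter int, @maxseq int
select @counter = 0                            -- assumed starting point
select @maxseq = max(seq_id) from temptable    -- assumed upper bound

while @counter < @maxseq
begin
    update cl_rel
    set    rel.sid = cl.sid
    from   temptable cl (index idx1), cl_rel rel (index idx_clrpt1)
    where  cl.seq_id > @counter
    and    cl.seq_id <= (@counter + 1000)
    and    (rel.sid != cl.sid or rel.sid is null)

    select @counter = @counter + 1000
end
```

Batching like this is usually done to keep the log and lock footprint of each statement small; committing per batch (or giving each batch its own transaction) is the typical companion to this structure.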
1. This table has 3 more indexes which I don't think have any relevance to the field being updated and don't contain sid. Would dropping them speed up the update?
2. If the other 3 indexes do contain the field sid, would dropping them speed up the update?
3. In the update, does the reference to the 2 indexes, idx1 and idx_clrpt1, help speed up the update?
4. If I drop all the indexes on the table and run the update, what would be the outcome? My guess is it would slow down the update.
Note: I cannot replicate the high-load scenario on the DB server in QA, hence I am unable to test these scenarios myself.
Your whole bottleneck is on cl_rel, which is big.
Regarding the indexes, the optimizer is choosing the ones you want because you are forcing it with the hints.
Do not run this without indexes; it would be extremely heavy.
Looking at the code, it would be good if cl.seq_id were part of the index as well (because it probably reduces the volume of data).
Check your server configuration for the parameter:
sp_configure "enable literal autoparam"
Set it to '1' if it is not already. Otherwise the variables in the where clause won't be compiled at once with the query.
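For example, checking and changing it would look something like this (requires sa_role; standard sp_configure syntax):

```sql
-- Show the current setting
sp_configure "enable literal autoparam"
go

-- Turn it on
sp_configure "enable literal autoparam", 1
go
```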
So, to respond to your questions:
1 - No.
2 - No.
3 - Yes, but ideally index idx_clrpt1 would have the fields (sid, seq_id). On a big table it may be too big a task to add this index just for this update, and not worth it overall (I don't know the environment).
4 - No.
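For point 3, the suggested composite index would look something like the sketch below; the column names come from the posted query, and whether cl_rel actually has a seq_id column (and whether the existing single-column index can safely be replaced) has to be checked against the real schema first:

```sql
-- Hypothetical replacement for idx_clrpt1 covering both columns the
-- update touches; verify the column list against the real cl_rel schema.
drop index cl_rel.idx_clrpt1
create index idx_clrpt1 on cl_rel (sid, seq_id)
```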