I have a table 'A' that contains more than 700 million rows. The table has around 13-14 fields and 12 nonclustered indexes.
There is one primary key on this table, of type integer, with a nonclustered index on it. I want to change the datatype of this column to BIGINT, because we have hit the maximum positive value and cannot insert negative values. I tried firing an ALTER command, but it took almost 42 hours to change the datatype, and from the result set I could see that all 13 indexes were rebuilt.
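The statement was essentially of this form (Sybase ASE syntax assumed; 'id' is a placeholder for the real key column):

    -- roughly the statement that ran for 42 hours
    alter table A modify id bigint not null
    go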
Can anyone suggest a quicker way to do this without any data issues?
Well, it will take time with such a big table and that many indexes. It is a good opportunity for you to check whether all of them are actually used and drop a couple if they are not.
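For example, if you are on ASE 15.7 or later with the MDA tables enabled, something along these lines can show how often each index has been used since startup (the monOpenObjectActivity table and its UsedCount column are assumptions about your version, so check against your docs):

    -- sketch: per-index usage counters from the MDA tables
    -- (assumes ASE 15.7+, MDA enabled, and the mon_role granted)
    select DBName, ObjectName, IndexID, UsedCount, LastUsedDate
    from master..monOpenObjectActivity
    where DBName = 'yourdb'        -- placeholder database name
      and ObjectName = 'A'
    order by UsedCount
    go

IndexID 0 is the table itself; indexes with a persistently low UsedCount are drop candidates.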
Another way for you to do it (there is a rough end-to-end sketch after the steps):
backup the table schema
bcp the data out
truncate the table, or drop it
drop the indexes, or recreate the table without indexes
alter the datatype, or, if you recreate the table, create it with the new datatype from the start
recreate the PK and the indexes (use consumers and related parameters to speed up index creation; also try to create several indexes in parallel)
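A minimal sketch of the drop-and-reload variant, assuming Sybase ASE. Every name here (yourdb, the placeholder columns, the paths and logins) is made up, and the bcp lines run from the OS shell, not from isql:

    -- 1. bcp the data out in character mode from the OS shell, so the
    --    int values reload cleanly into the new bigint column:
    --      bcp yourdb..A out /data/A.bcp -c -Uyour_login -Syour_server

    -- 2. drop the old table (all 13 indexes go with it)
    drop table A
    go

    -- 3. recreate it with the widened key and no indexes yet;
    --    col2/col3 stand in for the other 12-13 real columns
    create table A (
        id   bigint      not null,   -- was int
        col2 varchar(50) null,
        col3 datetime    null
    )
    go

    -- 4. enable minimally logged (fast) bcp into the index-free table
    use master
    go
    sp_dboption yourdb, 'select into/bulkcopy/pllsort', true
    go
    use yourdb
    go
    checkpoint
    go

    -- 5. load the data back, committing in batches, again from the shell:
    --      bcp yourdb..A in /data/A.bcp -c -b 50000 -Uyour_login -Syour_server

    -- 6. rebuild the PK and the nonclustered indexes last; 'consumers'
    --    sets the number of parallel sort threads for each build, and
    --    separate isql sessions can build different indexes concurrently
    alter table A add constraint A_pk primary key nonclustered (id)
    go
    create nonclustered index ix_A_col2 on A (col2)
        with consumers = 4
    go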
It would be nice if you recorded the time spent, compared it to your previous attempt, and posted the results here.
Make sure there is no activity that might use the table while you do this, and obviously take a dump first.
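For instance (database name and path are placeholders):

    dump database yourdb to '/backups/yourdb_pre_alter.dmp'
    go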
All the indexes are being used, as this is an important table used for management-level reporting. That is why I cannot keep the table unavailable for long.
But as you suggested, I will definitely try the bcp option and let you know the timings.