I have a database with about 8 million rows. I am performing updates that are scattered throughout the database (they are in order, but may skip ahead to the 10th or 100th next record between updates). The table has a clustered index on row_id.

I read the data in with a data reader, selecting only the records I actually need into the reader.


At the top I have declared a class-level stored procedure parameter for each value, like this:

private SqlParameter m_sqlSPparm_1;
m_sqlSPparm_1 = sqlCmd.Parameters.AddWithValue(sarStoreProcParmValuesNames[i], string.Empty);



Each time the function is called (for each record) I assign the stored procedure parameter values:

m_sqlSPparm_1.Value = "value I want to update with";
After I assign all the stored procedure parameters, I call this for each update:

sqlCmd.CommandText = stored_procedure_name;
sqlCmd.CommandType = CommandType.StoredProcedure;
sqlCmd.ExecuteNonQuery();
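Putting it all together, the per-record loop looks roughly like this (a simplified sketch; the connection string, the recordsToUpdate collection, and the record's property names are placeholders, and the command and parameters are set up once rather than per record):

```csharp
using System.Data;
using System.Data.SqlClient;

// Sketch of the update loop; names of the collection and its
// properties are placeholders standing in for my real code.
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();

    var sqlCmd = new SqlCommand("my_sp", conn);
    sqlCmd.CommandType = CommandType.StoredProcedure;

    // Parameters are added once and reused for every record.
    var parmRowId = sqlCmd.Parameters.Add("@parm_row_id", SqlDbType.BigInt);
    var parm2 = sqlCmd.Parameters.Add("@parm_2", SqlDbType.Char, 40);
    // ... remaining parameters added the same way ...

    foreach (var record in recordsToUpdate)
    {
        parmRowId.Value = record.RowId;
        parm2.Value = record.Field2;
        // ... assign remaining parameter values ...

        // ExecuteNonQuery blocks until the statement completes,
        // so each update has finished before the next call starts.
        sqlCmd.ExecuteNonQuery();
    }
}
```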


Eventually, I think SQL Server is getting bogged down, and it stops responding and results in a timeout error. How can I wait until the stored procedure has finished updating before moving on to the next block? How do I check whether SQL Server's queue is being overloaded? Is it possible to increase the queue size?




Here is the stored procedure:

SET ANSI_NULLS ON
SET QUOTED_IDENTIFIER ON
GO

ALTER PROCEDURE [dbo].[my_sp]
@parm_1 bigint,
@parm_2 char(40) ,
@parm_3 char(40) ,
@parm_4 char(30) ,
@parm_5 char(2) ,
@parm_6 char(10) ,
@parm_7 char(40) ,
@parm_8 char(4) ,
@parm_row_id bigint -- row key used in the WHERE clause
AS
BEGIN
SET NOCOUNT ON;

UPDATE table_name_goes_here SET
sql_field_1 = @parm_1,
sql_field_2 = @parm_2,
sql_field_3 = @parm_3,
sql_field_4 = @parm_4,
sql_field_5 = @parm_5,
sql_field_6 = @parm_6,
sql_field_7 = @parm_7
WHERE row_id = @parm_row_id

END