I insert a record into a table and "later" I update it.
I have two fields to capture time information: Created and LastModified.
My update is very simple: update .... set ..,[LastModifiedDate] = GetDate() where id = @pId.
Now my problem is that I am seeing the Created and LastModified times as the same (in the format 2007-09-05 12:38:42.383)!?
The record has definitely been updated (other fields are populated).
Maybe, upon selecting the row, you should grab the ModifiedDate as well as the ID, and then on update check whether the dates are still the same - otherwise it means someone has changed the row since you last grabbed it... meaning it would appear a miracle occurred.
Now this may not be your problem, but it is definitely a doubt, because you can't tell, and I'll always try to code toward absolut... I mean absolutes.
Here's a sample
UPDATE [MEP]
SET   MEPType = @MEPType
    , MEPName = @MEPName
    , MEPStatus = @MEPStatus
    , MEPStatusDate = @MEPStatusDate
    , ProjectNum = @ProjectNum
    , PropCompDate = @PropCompDate
    , Comments = @Comments
    , ModifiedDate = GETDATE()
    , ModifiedBy = @APPUSER
WHERE MEPRecID = @MEPRecID
  AND ModifiedDate = @ModifiedDate

SELECT @error = @@ERROR, @rowcount = @@ROWCOUNT

IF @error <> 0
    SELECT @Error_Loc = 1, @Error_Type = 50001, @rc = -1

IF @rowcount = 0
BEGIN
    IF NOT EXISTS (SELECT * FROM [MEP] WHERE MEPRecID = @MEPRecID)
        SELECT @Error_Loc = 2, @Error_Type = 50002, @rc = 1
             , @Error_Message = 'No Rows Updated. Data has been deleted or ID passed is incorrect'
    ELSE
        SELECT @Error_Loc = 3, @Error_Type = 50002, @rc = 2
             , @Error_Message = 'Rows Have Been Modified since last SELECT'
END
This is part of a bigger system so posting code would be confusing, but I guess you are suggesting I try to isolate a simple instance of the issue. The operation definitely occurs in 2 distinct steps:
1. insert record and raise an event to tell a listener that the record is available (C# stuff)
2. listener does its thing and updates the record.
I suppose my question really was whether you folks are surprised by this - or whether you believe it to be possible at all? I certainly have put a lot of effort into the performance of this piece of code (real-time system processing 1000s of messages from queues, applying them to business data, raising distributed events to remote desktops, etc.) but I sure don't believe that it can do even the simplest insert followed by update within the same millisecond!
sure don't believe that it can do even the simplest insert followed by update within the same millisecond!
Easily. Try writing a loop to insert GETDATE() into a table. You will get a fair few identical records. BTW - the smallest unit of the SQL Server datetime data type is 3.3 milliseconds - so maybe you can believe it can do this in three times the time you thought it was.
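Something like this sketch would show it (the temp table and counts are made up for illustration - run it and compare total rows against distinct timestamps):

```sql
-- Sketch of the suggested experiment. datetime stores time in 1/300-second
-- ticks, so a tight loop of inserts lands many rows on the same value.
CREATE TABLE #TimeTest (Stamp datetime NOT NULL);

DECLARE @i int;
SET @i = 0;
WHILE @i < 1000
BEGIN
    INSERT INTO #TimeTest (Stamp) VALUES (GETDATE());
    SET @i = @i + 1;
END

-- Far fewer distinct values than rows inserted.
SELECT COUNT(*) AS TotalRows, COUNT(DISTINCT Stamp) AS DistinctStamps
FROM #TimeTest;

-- The rounding itself: datetime milliseconds are stored only as
-- .000/.003/.007 increments, so two times a couple of milliseconds
-- apart can collapse to the same stored value.
SELECT CAST('2007-09-05 12:38:42.382' AS datetime) AS a,  -- stored as .383
       CAST('2007-09-05 12:38:42.384' AS datetime) AS b;  -- stored as .383

DROP TABLE #TimeTest;
```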
The key point is the amount of time between issuing the two instructions. If that gap is > 3.3 milliseconds then your code becomes relevant, because something in it must be wrong.
I fully believe SQL 2005 is this fast. My time trials back in 2004 showed close to 100% improvement over SQL 2000, but it was not a very scientific test. If you need the extra precision, I am afraid you will have to wait until next summer when SQL 2008 comes out, and you can use the new date and time types.
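For reference, the new types look something like this (table and column names are invented for illustration - datetime2 stores fractional seconds to 100 ns instead of datetime's 1/300 s):

```sql
-- Hypothetical audit columns using the SQL Server 2008 datetime2 type.
CREATE TABLE MessageAudit (
    Id           int IDENTITY(1,1) PRIMARY KEY,
    Created      datetime2(7) NOT NULL DEFAULT (SYSDATETIME()),
    LastModified datetime2(7) NULL
);
-- SYSDATETIME() is the higher-precision counterpart of GETDATE(), so a
-- Created = LastModified collision becomes vanishingly unlikely.
```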
Obviously I'm not a regular user of this forum so I'm not sure how you folks communicate/interact, so I'll try to clarify.
Maybe, upon the selection of the row, you should also grab the modifiedDate as well as the ID,....
..thing is that I don't have any concurrency issue with this. Just an unexpected situation since upgrading to 2005 (i.e. I have never seen this happen in 2000).
This is a problem? Man, I need a new job.
.. only problem is that I didn't ever expect such a situation. I do, however, use Created = LastModified in some logic, for lack of another field that would tell me whether any attempt at all has been made to process the record. The only field/flag I have is a 'Processed' one, but that indicates whether the data was successfully applied to the business data or not.
Probably not explaining that so well - basically the records I'm talking about are XML messages coming off queues. There are three very distinct steps once a message arrives: 1. persist it locally (within a queue transaction) and only then 2. process it - meaning use its data to update various other data in the database - and 3. broadcast the new 'business' data (business objects really) to all desktops. The important point is that taking it from the queue and persisting it is one distinct transactioned step. Processing it and updating the LastModified flag is another, so I would have expected at least a millisecond to have passed.
Ummmmmm...are you using stored procedures? Raising an error so you can know, for every row that's inserted...and you think that's fast
..not understanding this post at all - sorry.
Easily. Try writing a loop to insert GETDATE into ...
..yep, fair enough, I would expect that. But this is two distinct (almost unrelated) stored procedure calls from two distinct processes!?
The key point is the amount of time between issuing the two instructions
..the only measure I have of this is those timestamps, I'm afraid.
Are you just trying to get the actual milliseconds the row was inserted?
..No, see above response to MCrowley - I have used the "fact" that they could never be the same to provide some other logic. Bad design and all but I was limited to the existing table structure.
Just to reiterate: this is NOT any kind of major problem - just a curiosity. It happens maybe 5 times a day, and usually when processing the contents of the message is expected to be very fast anyway - for example, when I immediately find that some content is bad.
I'm not sure what the two distinct processes are. If you mean these are 3GL apps calling SQL Server and there is communication between the two statements, then that sounds odd. If it is all within SQL Server then, irrespective of how many sproc calls there are, you would expect it to be lightning fast. Again - it all depends what goes on in between. I'm afraid I don't understand your business processes enough to get a handle on your description above.
You could add the below between the two statements to be absolutely satisfied what is going on.
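The snippet referred to here didn't survive in the post; something along these lines would do the job - a sketch (the trace table and step names are invented) that logs a timestamp at each side so you can see how much wall-clock time actually elapses between the insert and the update:

```sql
-- Hypothetical scratch table for checking the gap between the two calls.
CREATE TABLE TimingTrace (
    Step     varchar(20) NOT NULL,
    MEPRecID int         NOT NULL,
    Stamp    datetime    NOT NULL DEFAULT (GETDATE())
);

-- In the insert procedure, straight after the INSERT:
--   INSERT INTO TimingTrace (Step, MEPRecID) VALUES ('inserted', @MEPRecID);
-- In the update procedure, just before the UPDATE:
--   INSERT INTO TimingTrace (Step, MEPRecID) VALUES ('updating', @MEPRecID);
```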
Pootle's suggestion was what I was thinking of, if this turned out to be a problem. 5 - 10 rows out of 1,000's per day is not a big problem... except for the guy who has to fix 5 - 10 rows per day. I would only add to Pootle's suggestion a check in the update procedure, something like
IF @CreatedDate = GETDATE()          -- @CreatedDate read from the row earlier
    WAITFOR DELAY '00:00:00.005'     -- datetime resolution is ~3.3 ms, so wait a bit more than one tick
-- update statement goes here
This might save you even more devious problems later on.
5 - 10 rows out of 1,000's per day is not a big problem...except for the guy who has to fix 5 - 10 rows per day.
Lol. That is so true. Even after years of working with sets I still sometimes forget that it is the number of distinct problems, rather than the total number of rows affected, that determines how much effort I need to put into debugging/cleansing etc.