As you are aware, timestampdiff returns an approximate result. A result that is off by just one day is off by 86.4 million milliseconds, and the error grows with the size of the interval. You will need to write a UDF to get an accurate value.
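To illustrate the approximation (assuming DB2 for LUW syntax, where 16 is the TIMESTAMPDIFF interval code for days): TIMESTAMPDIFF works on the character form of a timestamp duration and assumes 30-day months, so it can miscount even whole days.

-- The duration between these two timestamps is exactly 1 month.
-- TIMESTAMPDIFF treats 1 month as 30 days; DAYS() gives the true count.
SELECT TIMESTAMPDIFF(16, CHAR(TIMESTAMP('2024-03-01-00.00.00')
                            - TIMESTAMP('2024-02-01-00.00.00')))
         AS APPROX_DAYS,                          -- 30
       DAYS('2024-03-01') - DAYS('2024-02-01')
         AS EXACT_DAYS                            -- 29 (2024 is a leap year)
FROM SYSIBM.SYSDUMMY1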
Thanks for the responses. I also found the following method here:
This will compute microseconds. Divide by 1000 to get milliseconds...
CREATE FUNCTION F.MICROSECONDS (X TIMESTAMP, Y TIMESTAMP)
RETURNS BIGINT
LANGUAGE SQL CONTAINS SQL NO EXTERNAL ACTION DETERMINISTIC
RETURN
-- whole days between the timestamps, expressed in microseconds
(DAYS(X) - DAYS(Y)) * BIGINT(86400000000) +
-- seconds since midnight for each timestamp, expressed in microseconds
-- (note: the fractional-second part of each timestamp is ignored)
MIDNIGHT_SECONDS(X) * BIGINT(1000000)
- MIDNIGHT_SECONDS(Y) * BIGINT(1000000)
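A quick usage sketch (the timestamps here are made up; assumes the function has been created in schema F as above):

SELECT F.MICROSECONDS(TIMESTAMP('2024-01-02-00.00.01'),
                      TIMESTAMP('2024-01-01-00.00.00')) AS MICROS,
       F.MICROSECONDS(TIMESTAMP('2024-01-02-00.00.01'),
                      TIMESTAMP('2024-01-01-00.00.00')) / 1000 AS MILLIS
FROM SYSIBM.SYSDUMMY1
-- MICROS = 86401000000 (one day plus one second), MILLIS = 86401000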
....why was the expression
MIDNIGHT_SECONDS(X) * BIGINT(1000000) - MIDNIGHT_SECONDS(Y) * BIGINT(1000000)
used instead of the equivalent
(MIDNIGHT_SECONDS(X) - MIDNIGHT_SECONDS(Y)) * BIGINT(1000000) ?
I modified the code according to your suggestion and it works the same. Probably somebody was in a hurry when writing the previous version.
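For the record, this is the RETURN expression after the change (everything else in the function stays the same):

RETURN
(DAYS(X) - DAYS(Y)) * BIGINT(86400000000) +
(MIDNIGHT_SECONDS(X) - MIDNIGHT_SECONDS(Y)) * BIGINT(1000000)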
Originally Posted by tonkuma
It is not consistent with the expression
(DAYS(X) - DAYS(Y)) * BIGINT(86400000000)
Yeah, but this code was delivered by a guy from IBM so I hope it is correct.
Still, I don't understand how IBM can deliver a function like timestampdiff that, in my opinion, is useless. Is it so difficult to count the number of milliseconds between two timestamps? I don't think so; there are algorithms for handling leap years and leap seconds that should be built into this function.
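For what it's worth, an exact count does look easy to write. DAYS() already accounts for leap years, and as far as I know a DB2 timestamp cannot even hold a leap second (the seconds field runs 00 to 59), so those fall out of scope anyway. A hypothetical F.MILLISECONDS sketch along the same lines as the function above, this time keeping the fractional-second parts:

CREATE FUNCTION F.MILLISECONDS (X TIMESTAMP, Y TIMESTAMP)
RETURNS BIGINT
LANGUAGE SQL CONTAINS SQL NO EXTERNAL ACTION DETERMINISTIC
RETURN
(DAYS(X) - DAYS(Y)) * BIGINT(86400000) +
(MIDNIGHT_SECONDS(X) - MIDNIGHT_SECONDS(Y)) * BIGINT(1000) +
-- include the fractional seconds; integer division drops sub-millisecond digits
(MICROSECOND(X) - MICROSECOND(Y)) / 1000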