I want to know whether the server uses rowsize * number-of-rows, or the actual size of the row (the data actually stored in it; some columns, such as CHAR columns, hold less data than their declared lengths) multiplied by the number of rows, when detecting long transactions.
I have a table
Table1 (x CHAR(10), y CHAR(20), z INTEGER).
I insert some rows.
The declared row size of the table is 30 + integer-size bytes,
but the actual data in a row may be only 3 + 3 + integer-size bytes.
Does the server use the former or the latter calculation for long transactions?
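For concreteness, here is a sketch of the two calculations the question contrasts (the 4-byte INTEGER size and the 3-character sample values are assumptions, not taken from the actual table):

```python
INT_SIZE = 4  # assumed size of an INTEGER column in bytes

def declared_size(n_rows):
    # rowsize * number of rows: CHAR(10) + CHAR(20) + INTEGER
    return (10 + 20 + INT_SIZE) * n_rows

def actual_size(n_rows, x_len=3, y_len=3):
    # actual data length per row, e.g. 3-character values in each CHAR column
    return (x_len + y_len + INT_SIZE) * n_rows

print(declared_size(1000))  # 34000
print(actual_size(1000))    # 10000
```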
It is not really calculated either way. The server just looks at the space in the logical logs between the BEGIN WORK statement and the current log position. If this space is larger than the specified percentage (LTXHWM), then IDS will trigger a long-transaction event.
And the space used in the log files starting from your BEGIN WORK is also consumed by concurrent users.
The more concurrent users with transactions, the less space you can use before the system detects a LONG TRANSACTION.
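The mechanism described above can be sketched as a toy model (this is an illustration of the idea, not IDS internals; the page counts and the `is_long_transaction` helper are invented for the example):

```python
# Toy model: a long transaction is flagged when the logical-log space
# between a transaction's BEGIN WORK position and the current log
# position exceeds LTXHWM percent of the total log space.

TOTAL_LOG_PAGES = 10_000  # assumed total logical-log size in pages
LTXHWM = 50               # long-transaction high-water mark, in percent

def is_long_transaction(begin_pos, current_pos):
    # The span includes pages written by *all* sessions, not just this one,
    # which is why concurrent activity shrinks your effective headroom.
    span = current_pos - begin_pos
    return span > TOTAL_LOG_PAGES * LTXHWM / 100

# A transaction began at page 1_000; concurrent sessions have pushed the
# log to page 7_000, so the 6_000-page span exceeds 50% of 10_000 pages.
print(is_long_transaction(1_000, 7_000))  # True
print(is_long_transaction(1_000, 4_000))  # False
```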