I am using ASE 15.0.3 and have issues with a Java process that every now and then throws an exception when trying to perform a batch insert (via a SQL query, not a stored procedure) into a table. Below is the exception:
JZ0BE: BatchUpdateException: Error occurred while executing batch statement: Your server command (family id #0, process id #61) encountered a deadlock situation. Please re-run your command.
The table it is inserting into has many columns but only one (non-clustered) index, on an id (int identity) column.
Other (concurrent) processes that interact with this table can do one of two things:
1) Select data from the table (transaction based)
2) Delete data from the table (transaction based, first acquiring an exclusive table lock with a 20-second wait).
The hunch is that the deadlock comes from the index pages. I cannot convert the locking scheme to datapages or datarows, since the table contains many large varchar columns that together add up to more than 8191 bytes, and ASE displays: ... exceeds limit of 8191 bytes for column-offset of variable-length columns in DOL tables. Future inserts to this table may fail.
Any ideas on how this situation can be avoided (other than removing the index from the table)?
Unless the cause is evident, the first step is usually to gather some details about how the deadlocks happen. This is done by asking the DBA to enable the configuration parameter "print deadlock information" for a period of time long enough to record some of the deadlocks.
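If I remember correctly that parameter is dynamic, so enabling and disabling it is just (run with sa privileges; deadlock chains then show up in the ASE error log):

```sql
-- turn on deadlock reporting in the ASE error log (dynamic, no restart needed)
sp_configure "print deadlock information", 1
go
-- ... let it run long enough to capture a few deadlocks, then turn it off:
sp_configure "print deadlock information", 0
go
```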
It may happen that some deadlocks are unavoidable in practice. The last resort is then to code some kind of automatic retry for the deadlocked transaction. Remember that the deadlock rolled back the entire transaction, so the whole transaction must be executed again. A sensible improvement to the retry logic is to stop retrying after n consecutive deadlocks; if there's no way, there's no way.
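Such retry logic could be sketched like this. It is a generic helper, not ASE-specific; the assumptions (check against your driver) are that ASE reports deadlocks as error 1205 and that jConnect surfaces that via `SQLException.getErrorCode()`. Note that `BatchUpdateException` extends `SQLException`, so the catch below covers your case:

```java
import java.sql.SQLException;
import java.util.concurrent.Callable;

public class DeadlockRetry {
    // Assumption: ASE deadlock victims see server error 1205
    // ("encountered a deadlock situation. Please re-run your command.")
    static final int DEADLOCK_ERR = 1205;

    /** Runs the whole transaction, retrying up to maxAttempts times on deadlock. */
    public static <T> T runWithRetry(Callable<T> txn, int maxAttempts) throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                // Re-execute the ENTIRE transaction: the deadlock rolled it all back.
                return txn.call();
            } catch (SQLException e) {
                if (e.getErrorCode() != DEADLOCK_ERR || attempt >= maxAttempts) {
                    throw e; // not a deadlock, or retries exhausted: give up
                }
                // Brief backoff before re-running the victim transaction.
                Thread.sleep(50L * attempt);
            }
        }
    }
}
```

The `Callable` should open/commit the transaction itself (or reset the connection state), so each retry starts from a clean slate.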