I am trying to optimize a system that handles a considerable amount of traffic, and I want to improve the throughput further. There was a row-level lock for every request on a table that has 4 million rows. I am trying to avoid this locking mechanism by inserting a unique value (one column) into a small table and deleting it once the request is serviced.
The problem is that the insert succeeds, but sometimes the delete fails with the error "Could not lock a row for delete".
Is there any way I can put this table in the DB server's memory so that no logging is done? The table has a unique index, and the PK is also on that field.
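For context, a minimal sketch of the "gate table" approach described above (the table and column names here are illustrative assumptions, not taken from the actual system):

```sql
-- Small gate table with row-level locking; each in-flight request
-- claims a row on entry and releases it on exit.
CREATE TABLE online_gate (
    trans_id CHAR(18) NOT NULL PRIMARY KEY
) LOCK MODE ROW;

-- Entry point: claim the slot for this transaction.
INSERT INTO online_gate (trans_id) VALUES ('TXN-000123');

-- Exit point: release the slot once the request is serviced.
DELETE FROM online_gate WHERE trans_id = 'TXN-000123';
```

The delete is the step that fails intermittently with "Could not lock a row for delete".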
These are the errors:
Could not position within a file via an index (SQL error: -245, ISAM code: -154).
Could not delete a row (SQL error: -240, ISAM code: -154)
Could not position within a table (psaadmin.online_trans) (SQL error: -243, ISAM code: -154)
Since this table is the entry and exit point of every transaction, I cannot add a sleep or increase the lock wait time-out (doing so would reduce performance).
Is there any other way to lock a transaction (i.e., to synchronize)?
This appears to be a problem with SET ISOLATION rather than with lock release (COMMIT/ROLLBACK delay).
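As a sketch of what to experiment with (the specific levels and wait value below are suggestions to try, not a known fix for this system):

```sql
-- Lower the isolation level so reads don't hold shared locks that
-- block another session's delete:
SET ISOLATION TO COMMITTED READ;
-- or, while debugging only, skip read locks entirely:
SET ISOLATION TO DIRTY READ;

-- Also check how long a blocked statement waits before failing.
-- ISAM error -154 is a lock/deadlock timeout, which suggests a
-- finite wait is configured somewhere:
SET LOCK MODE TO WAIT 5;  -- wait up to 5 seconds for the lock
```

Whether a longer wait is acceptable depends on the latency budget mentioned in the question; the isolation change is the part worth testing first.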
You need to debug the situation step by step and monitor how your locks are created and when they are released, to make sure everything is running correctly.
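A few `onstat` commands are useful for that kind of monitoring (these must be run on the DB server host while you reproduce the failure; the flags below are standard Informix options):

```shell
# List current locks: owner session, lock type (S/X/U), table and rowid.
onstat -k

# Show user threads, including sessions currently waiting on a lock.
onstat -u

# Server profile counters, including lock waits, timeouts and deadlocks.
onstat -p
```

Correlating `onstat -k` output with the failing delete should show which session holds the conflicting lock and what type it is.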
Make sure you aren't placing a SHARED lock on any row.
If your table is small, remove the PK and create just a unique constraint if needed; Informix is probably not using the PK anyway.
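In Informix syntax that would look something like this (the table, column and constraint names are placeholders; use the real ones from your schema):

```sql
-- Drop the existing primary key constraint by name:
ALTER TABLE online_trans DROP CONSTRAINT pk_online_trans;

-- Keep uniqueness via a plain unique constraint instead:
ALTER TABLE online_trans
    ADD CONSTRAINT UNIQUE (trans_id) CONSTRAINT uq_online_trans;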
To see whether the table's pages are in memory, use the command "onstat -P".
If you work with version 7.31, you can force the table into memory with SET TABLE xyz MEMORY_RESIDENT. On newer versions this statement has no effect; Informix is smart enough to keep the table in memory on its own.
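For completeness, the statement pair looks like this (table name is a placeholder; applies to 7.31-era engines only):

```sql
-- Pin the table's pages in the buffer pool (Informix 7.31):
SET TABLE online_trans MEMORY_RESIDENT;

-- Undo it later:
SET TABLE online_trans NON_RESIDENT;
```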