  1. #1
    Join Date
    Jun 2009
    Posts
    2

    How to handle version count overflow

    Hello,

    I have a problem where threads/servers can race to insert a new entry into some table. To handle this I have a locking table with a dedicated entry for every operation where a race condition can occur. That entry is updated in the same transaction in which the insert is done, so in case of an optimistic locking exception thrown by Hibernate a thread knows that it didn't come in first; it then only needs to reload the data, and thus gets the data as inserted by the winning thread. The locking table has a unique constraint to guarantee that there is only one entry for each operation where a race can occur. When the operation starts, the respective entry is loaded and stored in a thread-local variable; when the operation has finished, a session.merge(...) is done on the entry in that thread-local variable.
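    To make the flow concrete, here is a minimal plain-Java sketch of the pattern described above. The class and method names are hypothetical, and Hibernate's @Version bookkeeping is simulated with an in-memory compare-and-increment so the snippet runs standalone:

    ```java
    import java.util.HashMap;
    import java.util.Map;

    // Plain-Java simulation of the locking-table pattern. Names are
    // hypothetical; Hibernate's @Version check ("UPDATE ... WHERE id = ?
    // AND version = ?") is simulated by a compare-and-increment on an
    // in-memory row.
    public class LockTableSketch {

        // one row per race-prone operation (unique constraint on the name)
        static final Map<String, Long> lockTable = new HashMap<>();

        // load the entry at the start of the operation (kept in a
        // thread-local in the real code); returns the version that was read
        static long load(String operation) {
            return lockTable.computeIfAbsent(operation, op -> 0L);
        }

        // emulates session.merge(entry): succeeds only if no one else
        // incremented the version since it was loaded
        static boolean merge(String operation, long loadedVersion) {
            long current = lockTable.get(operation);
            if (current != loadedVersion) {
                return false; // Hibernate would throw an optimistic locking exception
            }
            lockTable.put(operation, current + 1);
            return true;
        }
    }
    ```

    A losing thread would catch the optimistic locking exception, reload the data, and find the row already inserted by the winner.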

    So far so good. Now comes the problem: it is only a question of time until the version count of some entry in the locking table reaches its maximum value, beyond which it cannot be incremented any more because the field width of the version column would be exceeded. In my scenario this will take months or maybe years, but it will definitely happen one day, so I'm looking for a solution. The nicest fix would be to simply reset the version count to 0, but that would cause an optimistic locking exception for any thread-local variable that still holds a reference to the entry loaded before the reset.

    The solution I have found so far is too complicated. Maybe someone else has a better idea. What I do is the following:

    If the version count has reached the maximum, another entry is created which is then incremented from that point on. The problem is that there can be race conditions during the change-over from the initial entry to the follow-up entry, and handling that is the difficult part. I thought of a third entry that is updated when the second entry is inserted. Unfortunately, this does not work with the @Transactional annotation in Spring: I would have to do a flush to get it to work, and in the past I have only seen session.flush(...) result in deadlocks. So I have this wild solution that I don't like, because it is too complicated:

    1. Let M be the maximum version count and N some number << M (much smaller than M).
    2. When the version count has reached M - N, the second entry is created.
    3. From then on, session.merge(...) is done on both entries until M has been reached; from that moment on, the merge is done only on the second entry. The idea is that with N sufficiently large, even the longest-running operations will have finished by the time M is reached.
    4. The first entry is deleted.
    5. When the second entry reaches M - N, the same procedure repeats: the first entry is created again and the second one is deleted once M has been reached.
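    Assuming successful merges are the only version increments, the steps above can be sketched in plain Java as follows; M and N are illustrative values, much smaller than a real version column's range:

    ```java
    // Simulation of the two-entry rotation in steps 1-5. M and N are
    // illustrative; in reality M would be the version column's maximum.
    public class RotationSketch {
        static final long M = 1000; // max version count (step 1)
        static final long N = 100;  // safety window, N << M (step 1)

        long firstVersion = 0;      // version of the entry currently in use
        Long secondVersion = null;  // follow-up entry, once created (step 2)

        // called once per successful merge on the locking entry
        void onMerge() {
            firstVersion++;
            if (secondVersion != null) {
                secondVersion++;              // step 3: merge both entries
            } else if (firstVersion >= M - N) {
                secondVersion = 0L;           // step 2: create the second entry
            }
            if (firstVersion >= M) {
                firstVersion = secondVersion; // steps 4-5: second entry takes over,
                secondVersion = null;         // first entry is deleted
            }
        }
    }
    ```

    After the rotation only one entry exists again, with a version far below M, so the cycle can repeat indefinitely.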

    The advantage of this approach is that the number of entries never grows beyond 2. The alternative would be to keep creating new entries without deleting the old ones, but to my taste that is a little "messy".

    Thanks to everybody who kept reading this long post up to this point :-). Maybe someone out there has a simpler solution. That would be nice.

    Cheers, Oliver

  2. #2
    Join Date
    Jun 2007
    Location
    London
    Posts
    2,527
    Quote Originally Posted by OlliPlough
    The solution I have found so far is too complicated.
    You're not kidding! I read your post twice and still have no clue what it is you're doing. I'm guessing that your solution might be the problem, but I suspect we need more info, things like:
    • what it is your application actually does
    • how many competing processes there are (or might be) in a given time frame
    • what the processes need to do
    • how long each process currently takes
    • table structures and indexes

    Mike

  3. #3
    Join Date
    Jun 2009
    Posts
    2
    Hi Mike,

    The problem I describe is a general one, independent of the amount of data, the number of threads, the number of servers, etc. If the number of threads is large, you would simply increase N. I thought my post would be a hard read. I also just realized that I implicitly assumed knowledge of Java (thread-local variables) and Hibernate, whereas this seems to be a pure DB forum. To explain the issue without the Java/Hibernate specifics:

    The locking table has a counter column that is used to implement optimistic locking (Hibernate calls this the version field). When an update is done and the counter has been incremented in the meantime, the update statement affects 0 rows, because its WHERE clause compares against the counter value that was originally read; this indicates to the caller that the data has been changed by someone else in the meantime. In Hibernate this is what merge does.
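    In SQL terms the check looks like the update sketched below; the table and column names are made up for illustration, and the statement is simulated against an in-memory row so the sketch runs standalone:

    ```java
    // Simulation of the versioned UPDATE described above. The real
    // statement would be something like
    //   UPDATE lock_table SET version = version + 1
    //   WHERE id = :id AND version = :versionReadEarlier
    // and it affects 0 rows when someone else incremented the counter
    // in the meantime. Table/column names are illustrative.
    public class VersionedUpdate {
        long storedVersion = 0; // the counter column of one locking row

        // returns the number of rows the UPDATE would affect: 1 on
        // success, 0 when the WHERE clause no longer matches
        int update(long versionReadEarlier) {
            if (storedVersion != versionReadEarlier) {
                return 0; // lost the race: caller must reload
            }
            storedVersion++;
            return 1;
        }
    }
    ```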

    The problem is that there can be race conditions during the change-over from the initial entry to the follow-up entry.
    Do you see that there is a possible race condition (glitch) here? If not, the whole problem and the sketched solution are hard to understand. The idea of having two entries that are updated in parallel for a certain time is to create a time window large enough that operations still holding a reference to the first entry will have terminated long before the N remaining increments are used up and the first entry is deleted.

    I'm using Oracle. Maybe there is an Oracle trick to handle this problem; unfortunately, I don't know Oracle that well. But in any case this is about finding a general solution :-).

    Regards, Oliver

  4. #4
    Join Date
    Jun 2007
    Location
    London
    Posts
    2,527
    I thought my post would be a hard read
    ... and it's not getting any clearer.

    You keep describing your (overly complicated) solution rather than just giving us a simple description of the problem. Most locking issues are simple to solve using transactions, whereas implementing your own locking strategy is likely just to transfer the problem from your original tables to your new locking tables. I'll drop out now until we get a clear and simple description of the problem.
