  1. #1
    Join Date
    Jan 2007

    Unanswered: Concurrent transaction problem

    Not sure if this belongs here or elsewhere, but I'm desperate, so I'm hoping someone here can help. I've searched for everything I can think of but can't find an answer.

    Here is the issue:

    I have an address table where, because of DB2 constraints (V7, I think), I cannot put all of the address fields into a unique key. Instead I have a "non-business" unique key, and the program that writes records to the table checks the other fields for uniqueness before it inserts a record. For the sake of simplicity, assume the table has three columns:

    key (integer, non-business object)
    City (varchar(30))
    State (varchar(2))
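    In DDL terms, the simplified table would look something like this (names are illustrative, not the real schema):

    ```sql
    -- Simplified illustration of the address table (names are made up)
    CREATE TABLE ADDRESS (
        ADDR_KEY INTEGER NOT NULL PRIMARY KEY, -- non-business surrogate key
        CITY     VARCHAR(30),
        STATE    VARCHAR(2)
        -- no unique constraint on (CITY, STATE); the program enforces it
    );
    ```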

    The situation I have is that two uncommitted transactions attempt to write to this table at exactly the same time with exactly the same business data. Both read the committed rows and find no record, both generate a key (a random number), and both insert a record, so I end up breaking my "program-imposed" uniqueness.

    The only way around this that I can think of is to have the uniqueness check perform an uncommitted read as well as a committed read, and wait until the two return the same number of records; at that point I know nothing has "pending" rows. (I can't just use an uncommitted read alone, in case the uncommitted transaction ends up getting rolled back.)
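    Roughly, the idea is to run the existence check at two isolation levels and compare the counts (a sketch only; table and column names are illustrative):

    ```sql
    -- Committed view of the data (cursor stability)
    SELECT COUNT(*) FROM ADDRESS
     WHERE CITY = ? AND STATE = ?
      WITH CS;

    -- Dirty read that also sees uncommitted inserts
    SELECT COUNT(*) FROM ADDRESS
     WHERE CITY = ? AND STATE = ?
      WITH UR;

    -- If the two counts differ, another transaction has a pending row
    -- for the same address; wait and retry before inserting.
    ```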

    Does anyone else have any clever solutions (or even understand what I'm talking about)?

    Many thanks for any suggestions; hopefully this makes some sense. In case it matters, the program is written in COBOL, and I believe I am on DB2 V7. That is not something I can change, although if future versions have functionality that would help, I may be able to use that as a good reason to upgrade.

  2. #2
    Join Date
    Jan 2003
    Provided Answers: 5
    Why not lock the table in exclusive mode before you read for uniqueness? That way no one else can do anything until you are done, and you will no longer get this scenario.
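    Something along these lines (assuming a table named ADDRESS):

    ```sql
    -- Serialize all access to the table for the rest of the unit of work
    LOCK TABLE ADDRESS IN EXCLUSIVE MODE;

    -- ...check for an existing row, then insert if absent...
    -- The lock is released at COMMIT or ROLLBACK.
    ```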


  3. #3
    Join Date
    Jan 2007

    Thanks for the response. I'm not sure I can lock the table: unfortunately the units of work can be big, and with the number of other transactions trying to access the table it would end up causing deadlocks. Basically it's a large system, so table locking is out, and because the unique key is a random number, I don't believe locking just particular key ranges would work either.

  4. #4
    Join Date
    Jun 2003
    Toronto, Canada
    Provided Answers: 1
    Well, you _have_ to serialize access to the table somewhere to ensure uniqueness. With a unique index, the serialization happens during the insert into the index. Since you don't have an appropriate unique index, you'll have to serialize inserts into the table itself.

    Another workaround might be to define a column containing a hash value (e.g. MD5 or SHA-1) calculated for the entire combination of address fields. You would then create a unique index on that column. Note that there is a small (on the order of 1/2**64) probability of hash value collisions; you would have to decide whether you can live with this.
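    A rough sketch of that workaround (the application would compute the hash over the concatenated, normalized address fields before each insert; table and column names are illustrative):

    ```sql
    -- Hash of the full address, computed by the application
    -- (e.g. SHA-1 rendered as 40 hex characters)
    ALTER TABLE ADDRESS ADD COLUMN ADDR_HASH CHAR(40) NOT NULL;

    -- The database now enforces uniqueness; concurrent inserts of the
    -- same address serialize on this index and one of them fails.
    CREATE UNIQUE INDEX ADDRESS_HASH_UX ON ADDRESS (ADDR_HASH);
    ```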
    Last edited by n_i; 01-23-07 at 16:29.
    "It does not work" is not a valid problem statement.

  5. #5
    Join Date
    May 2003
    You can create an AFTER INSERT trigger that counts the number of rows with your business "key" and raises an error if the count is greater than 1.
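    A sketch of such a trigger (DB2 trigger syntax varies by version and platform, so treat this as an illustration; the names and SQLSTATE are made up):

    ```sql
    CREATE TRIGGER ADDR_DUP_CHK
    AFTER INSERT ON ADDRESS
    REFERENCING NEW AS N
    FOR EACH ROW MODE DB2SQL
    WHEN ((SELECT COUNT(*) FROM ADDRESS A
            WHERE A.CITY  = N.CITY
              AND A.STATE = N.STATE) > 1)
        SIGNAL SQLSTATE '75001' ('Duplicate address detected')
    ```

    Note that whether the trigger's count sees a concurrent uncommitted insert depends on locking and isolation, so this closes the window only to the extent that the reads serialize.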
    M. A. Feldman
    IBM Certified DBA on DB2 for Linux, UNIX, and Windows
    IBM Certified DBA on DB2 for z/OS and OS/390
