  1. #1
    Join Date: Jun 2004 | Posts: 2

    Unanswered: Blocking SPID ... no apparent reason

    Hi,
    I have a Java application that updates a SQL Server 2000 (SP3a) database.
    The application handles different types of "jobs" which effectively update the DB.

    One job in particular appears to block all subsequent jobs. It consists of a large number of inserts/updates in a single transaction. This is necessary because it is an "all or nothing" scenario, so we cannot break it into smaller transactions. The transaction appears to succeed, in that it reaches the COMMIT TRAN statement without error.
    However, the records are never written to the database.
    Enterprise Manager shows a large number of locks held on the tables accessed by the transaction, and these are never released.
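
    To make the shape of the job concrete, here is a minimal JDBC sketch of that pattern; the table name, column, and batching are placeholders, not taken from the actual application:
    Code:
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.List;

    public class AllOrNothingJob {
        // Minimal sketch of the "all or nothing" job: every insert runs on one
        // connection with auto-commit off, and nothing becomes visible to other
        // SPIDs until commit() returns. JobData/payload are placeholder names.
        public static void run(Connection con, List<String> rows) throws SQLException {
            con.setAutoCommit(false);                       // open an explicit transaction
            PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO JobData (payload) VALUES (?)");
            try {
                for (String row : rows) {
                    ps.setString(1, row);
                    ps.addBatch();
                }
                ps.executeBatch();
                con.commit();                               // the app-side COMMIT TRAN
            } catch (SQLException e) {
                con.rollback();                             // undo everything on any failure
                throw e;
            } finally {
                ps.close();
            }
        }
    }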

    Using the stored procedure sp_blocker_pss80, the blocking SPID shows a waittime of 0 and a waittype of 0x0000; its lastwaittype is WRITELOG and its status is AWAITING COMMAND.
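
    For reference, the same state can be cross-checked from a second JDBC connection by querying the SQL 2000 master..sysprocesses table; the monitoring connection and SPID value here are passed in by hand and are purely illustrative:
    Code:
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class SpidCheck {
        // Reads the blocking SPID's state from master..sysprocesses on a separate
        // monitoring connection. A non-zero open_tran means the server still has
        // a transaction open on that SPID.
        public static void printState(Connection monitorCon, int spid) throws SQLException {
            PreparedStatement ps = monitorCon.prepareStatement(
                    "SELECT status, blocked, open_tran, lastwaittype, waittime "
                  + "FROM master..sysprocesses WHERE spid = ?");
            try {
                ps.setInt(1, spid);
                ResultSet rs = ps.executeQuery();
                while (rs.next()) {
                    System.out.println("status=" + rs.getString("status").trim()
                            + " blocked=" + rs.getInt("blocked")
                            + " open_tran=" + rs.getInt("open_tran")
                            + " lastwaittype=" + rs.getString("lastwaittype").trim()
                            + " waittime=" + rs.getLong("waittime"));
                }
                rs.close();
            } finally {
                ps.close();
            }
        }
    }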

    I am using the MS SQL Server JDBC Driver SP2 (and am considering switching to jTDS).

    I have tried:
    - increasing the transaction log size
    - moving the transaction log to a separate disk
    - reducing the isolation level to Read Uncommitted
    - setting AutoCommit to true
    - setting Close Cursor on COMMIT
    - setting SelectMethod to direct (we use cursor by default; these driver-level settings are sketched below)

    None of these have succeeded in fixing the issue.
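
    For context, the connection-level settings in that list are applied roughly like this with the SQL Server 2000 JDBC driver; the host, database name, and credentials below are placeholders:
    Code:
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class WriterConnectionFactory {
        // Illustrative only: dbhost, jobsdb, and the credentials are made up.
        public static Connection open() throws ClassNotFoundException, SQLException {
            Class.forName("com.microsoft.jdbc.sqlserver.SQLServerDriver");
            String url = "jdbc:microsoft:sqlserver://dbhost:1433"
                       + ";DatabaseName=jobsdb"
                       + ";SelectMethod=direct";          // default had been cursor
            Connection con = DriverManager.getConnection(url, "appuser", "secret");
            con.setAutoCommit(true);                      // one of the attempted workarounds
            con.setTransactionIsolation(
                    Connection.TRANSACTION_READ_UNCOMMITTED);  // another attempted workaround
            return con;
        }
    }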

    The job will succeed if it is the first/only job to access the database, but if another job precedes it, the blocking occurs.
    I have verified that the preceding job holds only shared database locks before the blocking job is run.

    Each job will use its own JDBC connections to access the database for reading
    purposes, but all of the writing goes through the blocking SPID.
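
    A rough sketch of that connection layout, purely to illustrate the description above (class names, method names, and the URL are made up):
    Code:
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class JobConnections {
        // All jobs funnel their writes through this single shared connection
        // (the SPID that ends up blocking), while each job opens its own
        // connections for reads.
        private static Connection writer;

        public static synchronized Connection getWriter() throws SQLException {
            if (writer == null || writer.isClosed()) {
                writer = DriverManager.getConnection(
                        "jdbc:microsoft:sqlserver://dbhost:1433;DatabaseName=jobsdb",
                        "appuser", "secret");
            }
            return writer;
        }

        public static Connection openReader() throws SQLException {
            // one per job, used only for reads
            return DriverManager.getConnection(
                    "jdbc:microsoft:sqlserver://dbhost:1433;DatabaseName=jobsdb",
                    "appuser", "secret");
        }
    }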

    Any ideas?
    Thanks, Liam

  2. #2
    Join Date: Feb 2004 | Location: In front of the computer | Posts: 15,579 | Provided Answers: 54
    If you close the JDBC connection (when the transaction is complete), does that fix the problem?
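
    A minimal sketch of the suggestion, with placeholder names (commit the work, then close the connection so the server releases everything tied to that SPID):
    Code:
    import java.sql.Connection;
    import java.sql.SQLException;

    public class CloseAfterCommit {
        // Sketch only: commit the job's work, then close the connection so the
        // server tears down everything associated with that SPID. A fresh
        // connection would then be opened for the next job.
        public static void finishJob(Connection con) throws SQLException {
            con.commit();
            con.close();
        }
    }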

    -PatP

  3. #3
    Join Date: Jan 2003 | Location: Massachusetts | Posts: 5,800 | Provided Answers: 11
    I wonder if this is a deadlock situation. By default, deadlocks are not logged. Try running the following:
    Code:
    dbcc traceon (-1, 1205)
    then run the big update process. This should start logging deadlock information to the SQL Errorlog.

    Except for the data not being written at the end, you have described exactly what locking is designed to do. While someone is writing data to the database, no one else can read that data until they are done. You can potentially try to reduce table locks by checking that the update process is using an appropriate index. Try running the Index Tuning Wizard, and see if it makes any suggestions. This should, of course, be done on a test box first.

  4. #4
    Join Date: Jun 2004 | Posts: 2
    Hi,
    Thanks for replies.
    1) Connections cannot be terminated after batch jobs complete, due to the nature of the application.

    2) DBCC TRACEON resulted in a number of entries like the following in the SQL errorlog:

    Starting deadlock search 5306
    Target Resource Owner:
    ResType:LockOwner Stype:'OR' Mode: S SPID:57 ECID:0 Ec:(0x484D9510) Value:0x4b0eb320
    Node:1 ResType:LockOwner Stype:'OR' Mode: S SPID:57 ECID:0 Ec:(0x484D9510) Value:0x4b0eb320
    Node:2 ResType:LockOwner Stype:'OR' Mode: S SPID:53 ECID:0 Ec:(0x42ECD510) Value:0x4b103b00
    End deadlock search 5306 ... a deadlock was not found.

    3) Index Tuning Wizard - haven't tackled that yet.

  5. #5
    Join Date: Jan 2003 | Location: Massachusetts | Posts: 5,800 | Provided Answers: 11
    I thought 1205 gave you information about deadlocks in progress. I was apparently wrong. Try this:

    Code:
    dbcc traceoff (-1, 1205)   -- turn off the deadlock-search trace flag
    go
    dbcc traceon (-1, 1204)    -- 1204 writes deadlock participant details to the errorlog
