  1. #1
    Join Date
    Feb 2003
    Posts
    17

    Unanswered: performance problem

    hi,
I came to know that DB2 performance (no. of transactions/sec) is supposed to be good compared to other RDBMSs, but I am getting much lower performance than the others (almost 2 times less than expected).
1) I am using separate DB-managed tablespaces for table data (size 8K*15000) and indexes (size 8K*5000).
2) bufferpool of size 8K*1000.
3) MAXAPPLS and MAXAGENTS are 200.
But I am not sure about the LOCKS. Here is the info collected while the application is running, using db2 "get snapshot for database on xxx":

    High water mark for connections = 51
    Application connects = 86
    Secondary connects total = 0
    Applications connected currently = 43
    Appls. executing in db manager currently = 14
    Agents associated with applications = 43
    Maximum agents associated with applications= 51
    Maximum coordinating agents = 51

    Locks held currently = 123
    Lock waits = 17211
    Time database waited on locks (ms) = 5002484
    Lock list memory in use (Bytes) = 30996
    Deadlocks detected = 0
    Lock escalations = 0
    Exclusive lock escalations = 0
    Agents currently waiting on locks = 5
    Lock Timeouts = 0

Can anyone help me tune DB2 to improve its performance? I am using DB2 CLI calls from C code to connect to the server.
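For reference, the counters above can also be gathered at lock level, which shows per-application lock detail; a sketch, assuming the lock monitor switch can be turned on and reusing the placeholder database name xxx from above:

```shell
# Sketch: turn on the lock monitor switch, then capture lock detail
# ("xxx" is the placeholder database name used above).
db2 update monitor switches using lock on
db2 get snapshot for locks on xxx
db2 get snapshot for database on xxx
```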

thanks in advance
    -ravi.

  2. #2
    Join Date
    Apr 2003
    Posts
    191

    Re: performance problem

    Hi ravi,

your problem description is quite generic. The only hints of a performance problem are

    Lock waits = 17211
    Time database waited on locks (ms) = 5002484
    ...
    Agents currently waiting on locks = 5

    which *may* indicate concurrency trouble.

    Are you able to test your application against a db used exclusively by yourself?

    Johann

    Originally posted by vishwapathi
    hi,
I came to know that DB2 performance (no. of transactions/sec) is supposed to be good compared to other RDBMSs, but I am getting much lower performance than the others (almost 2 times less than expected).
1) I am using separate DB-managed tablespaces for table data (size 8K*15000) and indexes (size 8K*5000).
2) bufferpool of size 8K*1000.
3) MAXAPPLS and MAXAGENTS are 200.
But I am not sure about the LOCKS. Here is the info collected while the application is running, using db2 "get snapshot for database on xxx":

    High water mark for connections = 51
    Application connects = 86
    Secondary connects total = 0
    Applications connected currently = 43
    Appls. executing in db manager currently = 14
    Agents associated with applications = 43
    Maximum agents associated with applications= 51
    Maximum coordinating agents = 51

    Locks held currently = 123
    Lock waits = 17211
    Time database waited on locks (ms) = 5002484
    Lock list memory in use (Bytes) = 30996
    Deadlocks detected = 0
    Lock escalations = 0
    Exclusive lock escalations = 0
    Agents currently waiting on locks = 5
    Lock Timeouts = 0

Can anyone help me tune DB2 to improve its performance? I am using DB2 CLI calls from C code to connect to the server.

thanks in advance
    -ravi.

  3. #3
    Join Date
    Feb 2003
    Posts
    17

    Re: performance problem

I tested it against Oracle and got good performance (no. of transactions/sec); compared with that, DB2 is about 2 times slower. How do I solve this LOCKS problem?

    ravi.

    Originally posted by jsander
    Hi ravi,

your problem description is quite generic. The only hints of a performance problem are

    Lock waits = 17211
    Time database waited on locks (ms) = 5002484
    ...
    Agents currently waiting on locks = 5

    which *may* indicate concurrency trouble.

    Are you able to test your application against a db used exclusively by yourself?

    Johann

  4. #4
    Join Date
    Apr 2003
    Posts
    191

    Re: performance problem

    Hi ravi,

Oracle uses multi-version concurrency control, which makes lock conflicts easier to deal with (readers don't block writers and writers don't block readers). In DB2, you need to design your applications properly in order to minimize lock problems.

    The Oracle approach helps concurrency at the cost of throughput.

I am sorry, but I don't think there is a way to help you unless you get specific. Did you develop your code with Oracle as the test bed? Even if you follow SQL standards, it is not easy to write portable code, and that applies to performance as well.

    Johann

    Originally posted by vishwapathi
I tested it against Oracle and got good performance (no. of transactions/sec); compared with that, DB2 is about 2 times slower. How do I solve this LOCKS problem?

    ravi.

  5. #5
    Join Date
    Feb 2003
    Posts
    17

    Re: performance problem

    Johann,
we support multiple backends, like MSSQL, Oracle, and Informix, and now I am porting to DB2 using CLI calls (which are similar to ODBC), whereas for Oracle I used embedded SQL code. Could the DB2 CLI calls be degrading the performance? Is there any specific care to take with DB2 CLI calls related to these LOCKS and COMMITs? Apart from the performance issue, my port is done. :-(

    ravi.

    Originally posted by jsander
    Hi ravi,

Oracle uses multi-version concurrency control, which makes lock conflicts easier to deal with (readers don't block writers and writers don't block readers). In DB2, you need to design your applications properly in order to minimize lock problems.

    The Oracle approach helps concurrency at the cost of throughput.

I am sorry, but I don't think there is a way to help you unless you get specific. Did you develop your code with Oracle as the test bed? Even if you follow SQL standards, it is not easy to write portable code, and that applies to performance as well.

    Johann

  6. #6
    Join Date
    May 2003
    Location
    USA
    Posts
    5,737
I would dramatically increase the storage allocated by the LOCKLIST database configuration parameter and also increase the buffer pool size (currently 8 MB). I can't guarantee that this will help, but both can be changed in a matter of minutes, so it may be worth a shot. Obviously you need real available memory on your system to make the increases.
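A sketch of the two changes, reusing the placeholder database name xxx; the bufferpool name BP8K and both sizes are illustrative assumptions, not recommendations:

```shell
# Increase LOCKLIST (units are 4 KB pages) and the 8K bufferpool
# ("xxx" and BP8K are placeholders; sizes are illustrative).
db2 update db cfg for xxx using LOCKLIST 1000
db2 "ALTER BUFFERPOOL BP8K SIZE 4000"
db2 terminate   # cfg change takes effect when the database is next activated
```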

  7. #7
    Join Date
    Aug 2001
    Location
    UK
    Posts
    4,650
From the little info available, as Marcus has pointed out, your lock waits seem to be a big (if not the biggest) bottleneck ... an average of roughly 290 ms per lock wait is far too long ...

    Have a look at your I/O performance also ...
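As a back-of-envelope check on the snapshot counters quoted earlier (5002484 ms of lock wait time over 17211 lock waits):

```shell
# Average wait per lock from the snapshot counters (integer ms).
echo $(( 5002484 / 17211 ))   # prints 290
```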

    Cheers

    Sathyaram

  8. #8
    Join Date
    Mar 2003
    Posts
    343
It would help to know which product this is (EE or EEE), on what platform, and whether the table is partitioned.

Check the isolation level for the updates and queries. If the isolation level is set to RR (Repeatable Read) and a query pulls a thousand rows from the table, then all 1000 rows are locked, even though only 10 qualify. And if the locks needed exceed the lock list limits, the database manager may escalate to a table-level lock instead.

The other isolation levels are Read Stability, Cursor Stability and Uncommitted Read. Each isolation level has its own advantages and disadvantages.

    Cursor stability is the default if isolation level is not specified.
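As a minimal illustration, the isolation level can also be overridden per statement with the WITH clause (table name T1 is a placeholder):

```shell
# Per-statement isolation override ("T1" is a placeholder table).
db2 "SELECT COL1 FROM T1 WITH UR"   # Uncommitted Read: no row locks
db2 "SELECT COL1 FROM T1 WITH CS"   # Cursor Stability
```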

    Hope this helps.

  9. #9
    Join Date
    Feb 2003
    Posts
    17
I am working on Solaris and the table is not partitioned.
DB2_RR_TO_RS is set to TRUE/YES.
As you said, table-level locking might be getting imposed instead of row-level locking, which is why there are so many locks. How can I make the locking granularity row-level instead of table-level?

I am running an application in which many threads/forked applications try to access the same set of tables but different rows simultaneously; because of the table-level locking, I think performance might be going down. I tried setting parameters like MAXAPPLS, MAXAGENTS, bufferpool, tablespace, MAXLOCKS, LOCKLIST etc., but to no avail. :-(
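One sketch of things to check for row-granularity locking (MYTAB is a placeholder table and xxx the placeholder database name; LOCKSIZE ROW is already the default, so the usual lever is LOCKLIST/MAXLOCKS headroom):

```shell
# Keep locking at row granularity ("xxx" and MYTAB are placeholders).
db2 "ALTER TABLE MYTAB LOCKSIZE ROW"    # request row locks for this table
db2 get db cfg for xxx | grep -iE "LOCKLIST|MAXLOCKS"
```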

    Originally posted by cchattoraj
    It would help to know what system this is EEE, EE on what platform and if the table is a partitioned table.

Check the isolation level for the updates and queries. If the isolation level is set to RR (Repeatable Read) and a query pulls a thousand rows from the table, then all 1000 rows are locked, even though only 10 qualify. And if the locks needed exceed the lock list limits, the database manager may escalate to a table-level lock instead.

The other isolation levels are Read Stability, Cursor Stability and Uncommitted Read. Each isolation level has its own advantages and disadvantages.

    Cursor stability is the default if isolation level is not specified.

    Hope this helps.

  10. #10
    Join Date
    May 2003
    Location
    USA
    Posts
    5,737
Did you try MAXLOCKS at 100 (100%)? I think that might prevent escalation to table locking unless there is only one application process running.

    There is a good discussion of these parameters in the "IBM DB2 Universal Database, Administration Guide: Performance."
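A sketch of that change, reusing the placeholder database name xxx; the LOCKLIST size is an illustrative assumption:

```shell
# MAXLOCKS is the percentage of the lock list one application may
# fill before its locks escalate; raise LOCKLIST too so it has room.
db2 update db cfg for xxx using MAXLOCKS 100
db2 update db cfg for xxx using LOCKLIST 2000
```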

  11. #11
    Join Date
    Feb 2003
    Posts
    17
No, but I will try it. My MAXAPPLS is 200; if I set MAXLOCKS to 100, won't there be a problem?

    Originally posted by Marcus_A
Did you try MAXLOCKS at 100 (100%)? I think that might prevent escalation to table locking unless there is only one application process running.

    There is a good discussion of these parameters in the "IBM DB2 Universal Database, Administration Guide: Performance."

  12. #12
    Join Date
    May 2003
    Location
    USA
    Posts
    5,737
    Setting MAXLOCKS to 100 will help prevent lock escalation unless one application process is using 100% of the locks in the Locklist. I am assuming that this is what you want in your system to allow multiple application threads to have high concurrency and to prevent lock escalation.

    But the system will be spending a lot more time taking row locks than if it escalated to table locking. Preventing lock escalation may not be a problem with lots of concurrent transaction processes running against the same tables, but might slow things down (with extra time spent on row locking) with single threaded batch processing.

    If the multiple processes running against the same tables are not time sensitive, then you might want to let escalation occur (to eliminate the extra time it takes for row locking), but increase the timeout parameter. But this doesn’t sound like what you want in your situation.

    In the above statement, I said MAXLOCKS at 100 will “help” prevent lock escalation. But it is also important to set the memory allocation in LOCKLIST high enough. If the Locklist memory is full, then escalation to table locking will occur regardless of the setting of MAXLOCKS.

    In DB2 OS/390 lock escalation can be disabled for a particular table, regardless of what happens at runtime. But DB2 OS/390 also has page level locks, in addition to row and table level locking.

    As previously stated, using CS isolation level and frequent commits are also important.
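A rough capacity estimate, assuming 4 KB lock-list pages and roughly 72 bytes per lock (the larger, first-lock-on-an-object size; these byte counts are assumptions recalled from the documentation of that era):

```shell
# Rough number of locks a 1000-page LOCKLIST can hold at ~72 bytes/lock.
LOCKLIST_PAGES=1000
echo $(( LOCKLIST_PAGES * 4096 / 72 ))   # prints 56888
```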

  13. #13
    Join Date
    Apr 2003
    Posts
    17

    Re: performance problem

    Your snapshot didn't indicate any lock escalation, so I'm not sure locklist and maxlocks are going to help.

    What is the value of locktimeout? If it's set to infinite (-1) that could be masking some horrendous queries. Setting it to 60 or so may lead to errors in that case, but at least you could see what was holding the locks the longest and leading to timeouts.
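A sketch of checking and changing it, reusing the placeholder database name xxx:

```shell
# LOCKTIMEOUT = -1 means wait forever; a finite value turns the
# longest lock waits into errors you can see and diagnose.
db2 get db cfg for xxx | grep -i LOCKTIMEOUT
db2 update db cfg for xxx using LOCKTIMEOUT 60   # seconds
```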

  14. #14
    Join Date
    Feb 2003
    Posts
    17

performance problem

Here is the database snapshot. Any comments on the locks held?


    Database Snapshot

    Database name = FWRAVDB
    Database path = /space/db2/db2inst1/db2inst1/NODE0000/SQL00012/
    Input database alias = FWRAVDB
    Database status = Active
    Catalog node number = 0
    Catalog network node name =
    Operating system running at database server= SUN
    Location of the database = Local
    First database connect timestamp = 06-02-2003 23:59:00.272310
    Last reset timestamp =
    Last backup timestamp =
    Snapshot timestamp = 06-03-2003 00:16:20.820979

    High water mark for connections = 51
    Application connects = 86
    Secondary connects total = 0
    Applications connected currently = 27
    Appls. executing in db manager currently = 3
    Agents associated with applications = 27
    Maximum agents associated with applications= 51
    Maximum coordinating agents = 51

    Locks held currently = 94
    Lock waits = 19220
    Time database waited on locks (ms) = 5526535
    Lock list memory in use (Bytes) = 22860
    Deadlocks detected = 0
    Lock escalations = 0
    Exclusive lock escalations = 0
    Agents currently waiting on locks = 0
    Lock Timeouts = 0

    Total sort heap allocated = 0
    Total sorts = 236
    Total sort time (ms) = 3
    Sort overflows = 0
    Active sorts = 0

    Buffer pool data logical reads = 1285168
    Buffer pool data physical reads = 72
    Asynchronous pool data page reads = 0
    Buffer pool data writes = 375
    Asynchronous pool data page writes = 375
    Buffer pool index logical reads = 2387528
    Buffer pool index physical reads = 96
    Asynchronous pool index page reads = 0
    Buffer pool index writes = 163
    Asynchronous pool index page writes = 163
    Total buffer pool read time (ms) = 7
    Total buffer pool write time (ms) = 112
    Total elapsed asynchronous read time = 0
    Total elapsed asynchronous write time = 112
    Asynchronous read requests = 0
    LSN Gap cleaner triggers = 146
    Dirty page steal cleaner triggers = 0
    Dirty page threshold cleaner triggers = 0
    Time waited for prefetch (ms) = 0
    Direct reads = 160
    Direct writes = 0
    Direct read requests = 27
    Direct write requests = 0
    Direct reads elapsed time (ms) = 8
    Direct write elapsed time (ms) = 0
    Database files closed = 0
    Data pages copied to extended storage = 0
    Index pages copied to extended storage = 0
    Data pages copied from extended storage = 0
    Index pages copied from extended storage = 0
    Host execution elapsed time = 1.526129

    Commit statements attempted = 212167
    Rollback statements attempted = 418
    Dynamic statements attempted = 708189
    Static statements attempted = 212585
    Failed statement operations = 20
    Select SQL statements executed = 279311
    Update/Insert/Delete statements executed = 105677
    DDL statements executed = 0

    Internal automatic rebinds = 0
    Internal rows deleted = 0
    Internal rows inserted = 0
    Internal rows updated = 0
    Internal commits = 144
    Internal rollbacks = 1
    Internal rollbacks due to deadlock = 0

    Rows deleted = 20722
    Rows inserted = 26313
    Rows updated = 20408
    Rows selected = 1160369
    Rows read = 3619867
    Binds/precompiles attempted = 0

    Log space available to the database (Bytes)= 20400000
    Log space used by the database (Bytes) = 2516557
    Maximum secondary log space used (Bytes) = 0
    Maximum total log space used (Bytes) = 7056841
    Secondary logs allocated currently = 0
    Log pages read = 0
    Log pages written = 23045
    Appl id holding the oldest transaction = 136

    Package cache lookups = 383512
    Package cache inserts = 35735
    Package cache overflows = 0
    Package cache high water mark (Bytes) = 6553600
    Application section lookups = 708189
    Application section inserts = 358864

    Catalog cache lookups = 35823
    Catalog cache inserts = 49
    Catalog cache overflows = 0
    Catalog cache heap full = 0

    Number of hash joins = 0
    Number of hash loops = 0
    Number of hash join overflows = 0
    Number of small hash join overflows = 0

  15. #15
    Join Date
    May 2003
    Location
    USA
    Posts
    5,737
    Here are some other possible things to look at:

1. Look at the DB2_FORCE_NLS_CACHE registry variable if you are using an NLS version of DB2.

    2. I assume that you have dramatically increased buffer pool size from the original 8MB. On a server with 1 GB or more of memory, I would allocate about 1/2 of total system memory to buffer pools (spread among all buffer pools).

    3. I am not sure why you are using 8K pages, especially for the indexes. You might try switching to 4K pages just to see what happens.

4. Version 8 has type 2 indexes, which help improve concurrency. After the upgrade, the indexes must be converted to take advantage of this.

5. As you know, cursor stability gives better concurrency. Query the catalog to make sure that all packages have the CS isolation level. In addition, here is some info about read locks quoted from the manual:

    "Locks are acquired even if your application merely reads rows, so it is still important to commit read-only units of work. This is because shared locks are acquired by repeatable read, read stability, and cursor stability isolation levels in read-only applications. With repeatable read and read stability, all locks are held until a COMMIT is issued, preventing other processes from updating the locked data, unless you close your cursor using the WITH RELEASE clause. In addition, catalog locks are acquired even in uncommitted read applications using dynamic SQL."

6. Look at some of your applications that do not do updates or deletes and see if you can use the FOR READ ONLY or FOR FETCH ONLY clause. Quoting from the DB2 manual:

"The FOR READ ONLY clause ensures that the result table is read-only. The FOR FETCH ONLY clause has the same meaning. For result tables where updates and deletes are allowed, specifying FOR READ ONLY may improve the performance of FETCH operations. This possible improvement in performance occurs when the database manager is able to do blocking, rather than exclusive locks, on the data. You should use the FOR READ ONLY clause to improve performance except in cases where queries are used in positioned UPDATE or DELETE statements."
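A minimal illustration of the clause (T1 and its columns are placeholders):

```shell
# Declare the cursor read-only so the engine can block-fetch rows
# instead of holding them open for a positioned update.
db2 "SELECT COL1, COL2 FROM T1 FOR READ ONLY"
db2 "SELECT COL1, COL2 FROM T1 FOR FETCH ONLY"   # equivalent form
```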
