  1. #1
    Join Date
    Mar 2012
    Posts
    6

    Unanswered: Performance degradation during bulk inserts

    Hi all,

    I am using DB2 version 9.7.
    When I select records from a table while bulk inserts into the same table are running, the response time for the result set becomes much longer. Please suggest some solutions to overcome this problem.

    Thanks,
    Kashyap

  2. #2
    Join Date
    Apr 2012
    Posts
    1,035
    Provided Answers: 18
    You are probably experiencing row-locking, or you have an I/O bottleneck.
    Check the isolation level of the applications that perform the SELECT.
    Check the CUR_COMMIT database configuration value (a quick sketch of both checks follows at the end of this post).
    Consider committing the inserts more frequently (smaller units of work).
    Consider the table design: is it suitable for range partitioning? (You could load a partition and then attach it.)
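
    For example, a minimal sketch from the DB2 command line; the database name SAMPLE and the table KASHYAP.ORDERS are placeholders, not names from your system:

    # is currently-committed semantics enabled? (new in DB2 9.7)
    db2 get db cfg for SAMPLE | grep -i CUR_COMMIT

    # enable it if it is off; the change typically takes effect when the database is next activated
    db2 update db cfg for SAMPLE using CUR_COMMIT ON

    # run the reader with Uncommitted Read so it does not wait on the inserters' row locks
    db2 "SELECT COUNT(*) FROM KASHYAP.ORDERS WITH UR"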

  3. #3
    Join Date
    Mar 2012
    Posts
    6
    Thank you, db2mor.

    We have kept row locking for the table, with the Read Uncommitted isolation level.
    I am committing every 1,000 records, and I did not find any problem with I/O.

    I think the table is getting locked when I am selecting data from that table.

    Is there anything I need to configure? Please help!

  4. #4
    Join Date
    Apr 2012
    Posts
    1,035
    Provided Answers: 18
    If you are getting table locks, they might result from lock escalations.
    Be certain: verify your assumptions.
    Look in db2diag.log or in the monitoring table functions to see whether escalations are happening on the table.
    If they are NOT happening, then you are not getting table locks, and possibly your isolation level is not UR, or your SELECTs have additional lock clauses that you have not posted.
    If lock escalations *are* happening, adjust the table's LOCKSIZE and the lock thresholds appropriately; a sketch of how to check follows below.
    1000 rows is not a large unit of work, provided the commits actually happen.
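
    For example (a sketch only; SAMPLE is a placeholder database name, and the LOCKLIST/MAXLOCKS values are illustrative, not recommendations):

    # lock escalation counters since the database was last activated
    db2 "SELECT LOCK_ESCALS, X_LOCK_ESCALS FROM SYSIBMADM.SNAPDB"

    # escalations are also reported in db2diag.log as ADM5502W messages
    # (path below is the default diagnostic directory; yours may differ)
    grep ADM5502W ~/sqllib/db2dump/db2diag.log

    # if escalations are confirmed, give the lock list more room and raise the per-application threshold
    db2 update db cfg for SAMPLE using LOCKLIST 8192 MAXLOCKS 20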

  5. #5
    Join Date
    Oct 2009
    Location
    221B Baker St.
    Posts
    486

    bulk insert degradation

    Is there some reason the bulk inserts need to run concurrently with the other work? Bulk inserts cause many systems to "bog down".

    It sounds like this could be relieved by proper scheduling.

  6. #6
    Join Date
    Mar 2012
    Posts
    6
    db2mor, can you explain this? I have taken the following from the log file.


    2012-05-12-12.26.24.291974+330 I71969782E508 LEVEL: Severe
    PID : 19027 TID : 140640767698688 PROC : db2sysc 0
    INSTANCE: xx NODE : 000 DB : xxxx
    APPHDL : 0-10816 APPID: xxxx 1523.120512060259
    AUTHID : xx
    EDUID : 80 EDUNAME: db2agent (xxxx) 0
    FUNCTION: DB2 UDB, relation data serv, sqlrreorg_table, probe:600
    DATA #1 : String, 73 bytes
    Table Schema : abcd
    Table Name : yyyy

  7. #7
    Join Date
    Apr 2012
    Posts
    1,035
    Provided Answers: 18
    A reorg action got an error. Check for deadlocks and lock timeouts, and check the results of the reorg.
    Automatic reorgs can get in the way if they run concurrently with bulk inserts.
    There is little point in quoting a tiny fragment of db2diag.log without its context, and without relating the fragment to either the original symptom or the original table.

    Do not drip-feed clues in your postings. Try to answer all questions before posing new ones. Try to describe your environment better, and post the DB2 server's operating system.
    Test the impact of slowing the rate of inserts (a short delay after committing, before the next batch of inserts).
    Check whether the SELECTs on this table are table-scanning, and resolve that; a sketch of those checks follows below.
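
    For example (a rough sketch only; SAMPLE, the ABCD.YYYY table from your log fragment, and the ORDER_DATE column are placeholders for your own names and predicate):

    # is a reorg still running, or has one failed, on this table?
    db2pd -db SAMPLE -reorgs

    # check the access plan of the slow SELECT and look for a table scan (TBSCAN) on the insert target
    db2expln -database SAMPLE -statement "SELECT * FROM ABCD.YYYY WHERE ORDER_DATE = CURRENT DATE" -terminal

    # if it is table-scanning, an index on the predicate column usually removes the scan
    db2 "CREATE INDEX ABCD.YYYY_IX1 ON ABCD.YYYY (ORDER_DATE)"
    db2 "RUNSTATS ON TABLE ABCD.YYYY AND INDEXES ALL"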
