Unanswered: Performance degradation during bulk inserts
I am using DB2 version 9.7.
When I try to SELECT records from a table while bulk inserts into the same table are in progress, the response time for the result set is much higher than usual. Please suggest some solutions to overcome this problem.
You are probably experiencing row-locking, or you have an I/O bottleneck.
Check the isolation level of your applications (the ones that perform the SELECT).
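For example, if a reporting SELECT can tolerate uncommitted data, it can request UR explicitly rather than relying on the package defaults. A minimal sketch (schema, table, and column names are illustrative):

```sql
-- Read through uncommitted inserts instead of waiting on their row locks.
-- Acceptable only when dirty reads are tolerable for this query.
SELECT order_id, order_total
FROM   myschema.orders
WITH   UR;
```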
Check the CUR_COMMIT database configuration parameter.
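In 9.7, CUR_COMMIT controls currently-committed semantics for cursor-stability readers. One way to check and enable it from the CLP (the database name mydb is an assumption):

```sql
-- Show the current setting; look for "Currently Committed (CUR_COMMIT)".
GET DB CFG FOR mydb;

-- With CUR_COMMIT ON, CS readers see the last committed version of a row
-- instead of waiting on the inserter's locks. It takes effect for new
-- connections after the database is deactivated and reactivated.
UPDATE DB CFG FOR mydb USING CUR_COMMIT ON;
```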
Consider committing the inserts more frequently (smaller units of work).
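The insert side of that might look like this sketch: commit every few hundred rows instead of holding one unit of work across the whole load (the batch size is an assumption to be tuned):

```sql
-- Repeated for each batch of ~200 rows, rather than once at the very end;
-- each COMMIT releases the row locks the batch acquired.
INSERT INTO myschema.orders (order_id, order_total)
VALUES (?, ?);
-- ... repeat for the rest of the batch ...
COMMIT;
```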
Consider the table design: is it suitable for range partitioning? (You can load a partition and then attach it.)
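With range partitioning, the new data can be loaded into a staging table away from the readers and then attached as a new partition. A sketch only (table names, partitioning column, and boundary values are illustrative; check the exact ALTER TABLE syntax against your fix pack):

```sql
-- Attach the pre-loaded staging table as a new range partition.
ALTER TABLE myschema.orders
  ATTACH PARTITION part_2011_q1
    STARTING FROM ('2011-01-01') ENDING AT ('2011-03-31')
  FROM myschema.orders_staging;

-- The newly attached rows become visible after integrity checking.
SET INTEGRITY FOR myschema.orders IMMEDIATE CHECKED;
```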
If you are getting table locks, they might result from lock escalations.
Be certain. Verify your assumptions.
Look in db2diag.log, or use the monitoring table functions, to see whether escalations are happening on the table.
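Escalations show up in db2diag.log as ADM5502W messages, and the 9.7 monitoring functions expose a counter. For example (NULL means all workloads, -2 means all members):

```sql
-- A non-zero LOCK_ESCALS indicates escalations have occurred.
SELECT WORKLOAD_NAME, LOCK_ESCALS
FROM   TABLE(MON_GET_WORKLOAD(NULL, -2)) AS T;
```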
If they are NOT happening, then you are not getting table locks; possibly your isolation level is not UR, or your SELECT has additional lock clauses that you have not posted.
If lock escalations *are* happening, adjust the LOCKSIZE and the lock thresholds appropriately.
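For instance, giving the lock manager more memory (or letting STMM size it) raises the point at which escalation kicks in; the database name is again an assumption:

```sql
-- Let DB2 tune the lock memory and the per-application lock threshold.
UPDATE DB CFG FOR mydb USING LOCKLIST AUTOMATIC MAXLOCKS AUTOMATIC;
```

Alternatively, ALTER TABLE ... LOCKSIZE lets you choose row or table locking deliberately instead of leaving it to escalation.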
1000 rows is not really a large insert, provided the commits happen.
A reorg action got an error. Check for deadlocks and lock timeouts, and check the results of the reorg.
Automatic reorgs might get in the way if they run concurrently with the bulk inserts.
There is little point in quoting a tiny fragment of db2diag.log without context, and without relating the fragment to either the original symptom or the original table.
Do not drip-feed clues in your postings. Try to answer all questions before posing new ones, and describe your environment better: post the DB2 server operating system. Test the impact of slowing the rate of inserts (a short delay after committing, before the next batch of inserts). Check whether the SELECTs on this table are table-scanning, and resolve that.
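To see whether a SELECT is table-scanning, populate the explain tables and look for a TBSCAN operator. This assumes the explain tables already exist (e.g. created from EXPLAIN.DDL); the query itself is illustrative:

```sql
EXPLAIN PLAN FOR
  SELECT order_id, order_total
  FROM   myschema.orders
  WHERE  order_date = CURRENT DATE;

-- A TBSCAN operator on the table means a full scan; a suitable index
-- should turn it into an IXSCAN.
SELECT OPERATOR_ID, OPERATOR_TYPE, TOTAL_COST
FROM   EXPLAIN_OPERATOR
ORDER  BY OPERATOR_ID;
```

db2exfmt produces a more readable formatted version of the same plan.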