starting to doubt "connection concentrator"
The concept behind the "connection concentrator" has always appealed to me: I implemented it at one of my clients (a DB2 Content Manager site) and I am considering it at my other client (a typical DB2/WAS site). Recently I had to parse a large file and load it into DB2. So I started scripting in ksh/awk, and the result was a large file containing thousands of generated SQL statements; the last line of my script invoked "db2batch" and instructed it to execute all the SQL in that file (I generated merge and insert statements, each processing multiple rows).
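For anyone curious, the generate-and-run pattern looks roughly like this. This is a minimal sketch, not my actual script: the input file, table name, and columns are made up for illustration, and the db2batch call is commented out because it needs a live DB2 database.

```shell
# Illustration only: file, table and column names are hypothetical.
printf '1,alpha\n2,beta\n' > input.csv          # tiny sample input

# Generate one INSERT per input line with awk, collecting everything
# into a single SQL file.
awk -F',' '{
    printf "INSERT INTO mytab (id, name) VALUES (%s, '\''%s'\'');\n", $1, $2
}' input.csv > bulk.sql

# db2batch -d MYDB -f bulk.sql -r bulk.out     # needs a real DB2 instance
cat bulk.sql
```

The point is that db2batch is handed the whole file in one run, so every statement in it executes over one connection, back to back.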
I am a man so I like to watch
During the execution of the db2batch process I monitored the database with both "db2top" and "db2mon"....
Did not like what I saw: most of the time the session was "Decoupled from coordinator"..... what the *beep*? Why? There is plenty of work to be done!
It seems that the "connection concentrator" ALWAYS releases the agent after the SQL work is done, and when the next SQL comes up another agent is assigned from the pool.... works as designed.. okay... but what about my batch load, where more than a thousand queries must be executed as fast as possible? It seems there is a lot of overhead involved in dropping/assigning agents between two SQLs, and there is no way to overrule this per connection, is there?
It makes you think: how does this affect your other connections? When you have a WAS client with connection pooling, how does WAS divide the workload over those connections? One Java process will fire multiple SQLs to process one transaction; does that mean DB2 is constantly busy dropping/assigning agents for that transaction, or will the agent stay active until a "commit" or "rollback" is executed?
But then again: how can you tell? It is hard to distinguish one WAS connection from another, and how can you measure the overhead caused by coupling and decoupling coordinator agents?
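One rough way to get at least some numbers is the database manager snapshot, which has a few agent-related counters. This is only a sketch (run it as the instance owner against a live instance); it shows churn at the instance level, not per WAS connection:

```shell
# Agent-related counters from the dbm snapshot; needs a running instance.
db2 get snapshot for database manager | grep -i "agents"
# Fields of interest in the output include:
#   Agents assigned from pool
#   Agents created from empty pool
#   Idle agents
# If "Agents assigned from pool" grows much faster than your transaction
# count, agents are being coupled/decoupled between statements.
```

It will not attribute the overhead to a single connection, but sampling these counters before and after a batch run at least gives a comparable figure with the concentrator on versus off.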
So I will rerun my db2batch work on a test database with the "connection concentrator" disabled, and I am very curious about your experiences with the connection concentrator.
Is it as good as it seems or <sigh> "it depends"? I suspect the latter but then you must know on what it depends and how to measure.
Disclaimer: I have not used the connection concentrator and have little knowledge on this subject,
but what about my batch load, where more than a thousand queries must be executed as fast as possible?
I do not think connection concentrator was designed for an environment like this.
It is for an environment with a large number of connections, each active for a short interval followed by long periods of inactivity.
Of course, it would be nice if an agent stayed 'coupled' to a connection for an extended period after the transaction completes, with a parameter to define that period.
One Java process will fire multiple SQLs to process one transaction; does that mean DB2 is constantly busy dropping/assigning agents for that transaction, or will the agent stay active until a "commit" or "rollback" is executed?
According to this article, the agent is coupled to a connection till commit/rollback.
BTW, did you manage to get any stats on the overhead?
Not yet. The "test" database is shared by a lot of other people so I cannot bring it "up & down" when I feel like it. Soon I will have a sandbox for me to play with and then I will do some tests to compare the effects. I'll keep you all informed.
So at my 32-bit AIX client's site I will continue to use it, because I need every KB I can spare. My other client has V9.7 on a 64-bit machine with plenty of RAM, and there I will withdraw my recommendation to implement this feature.