I was asked to look into performance issues with some queries. I used the db2expln tool to analyze them, and the output is attached to this post. I don't know how to interpret the output, or what can be done to decrease the query execution time.
What you are seeing is, of course, a dump of the query execution plan: a complete breakdown of exactly what the query engine will do to execute your query. It is arranged hierarchically and it is exhaustive in its detail, but you should approach reading it "from the top down."
When you are trying to understand an under-performing query, the most important thing to bear in mind is that you are looking for a problem that is big: big enough for a human to notice. So don't sweat the small stuff, and don't get lost in the details. You are looking for a thousand-pound elephant.
A computer becomes bogged down in a query, badly enough for a human to really notice, pretty much only when it gets into a situation where it must repeat an operation in table X "for each and every record in" table Y. The slowdown becomes human-noticeable when several tables are chained together this way at once, and when I see LEFT OUTER JOIN showing up not just once but several times in the same query, I feel I need look no further. Any computer is going to bog down trying to complete such a task in a single query.
Consider this: if tables X, Y, and Z each have 1,000 records, a nested-loop process that chains the three tables together might, in the worst case, scan table Z once for every (X, Y) pair: that is 1,000 * 1,000 = 1,000,000 scans of Z, and 1,000 * 1,000 * 1,000 = one billion row combinations examined in all. "There's your trouble." In simple terms, there's the big fat thousand-pound-elephant problem.
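To put a number on that elephant, here is a minimal sketch of the worst-case work done by a naive nested-loop join chain. The function name and the table sizes are illustrative, not anything taken from the attached plan:

```python
# Worst-case work for a naive nested-loop join chain:
# for each row of X, scan Y; for each (X, Y) pair, scan Z.

def nested_loop_work(*table_sizes):
    """Return (scans of the innermost table, total row combinations examined)."""
    total = 1
    for size in table_sizes:
        total *= size
    # The innermost table is scanned once per combination of the outer tables.
    innermost_scans = total // table_sizes[-1]
    return innermost_scans, total

scans_of_z, combinations = nested_loop_work(1000, 1000, 1000)
print(scans_of_z)    # 1000000 scans of table Z
print(combinations)  # 1000000000 row combinations examined
```

Even at millions of comparisons per second, a billion row combinations is squarely in "human very noticeable" territory.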
What to do? Simplify the query. Run several smaller queries in series, materializing intermediate results along the way, so that the computer never attempts to fall down this hall of mirrors in one go. You can't tweak your way out of a query that asks for the impractical.
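As a rough sketch of the "run it in stages" idea: first materialize a small intermediate result, then join only against that reduced set. The table names here are hypothetical, and SQLite stands in for DB2 purely so the example is self-contained (in DB2 you would use a declared global temporary table for the same purpose):

```python
import sqlite3

# Illustrative schema only; not from the original post.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE orders (order_id INTEGER, customer_id INTEGER);
    CREATE TABLE items  (order_id INTEGER, sku TEXT);
    INSERT INTO orders VALUES (1, 10), (2, 20);
    INSERT INTO items  VALUES (1, 'A'), (1, 'B'), (2, 'C');
""")

# Step 1: materialize a small intermediate result
# instead of feeding everything into one giant join.
cur.execute("""
    CREATE TEMP TABLE wanted_orders AS
    SELECT order_id FROM orders WHERE customer_id = 10
""")

# Step 2: join only against the reduced set.
cur.execute("""
    SELECT i.sku
    FROM items i
    JOIN wanted_orders w ON w.order_id = i.order_id
    ORDER BY i.sku
""")
print([row[0] for row in cur.fetchall()])  # ['A', 'B']
```

Each stage touches far fewer rows than the single combined query would, which is exactly the point: you shrink the elephant before you join against it.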