  1. #1
    Join Date
    Sep 2003

    Question Unanswered: Total cost v/s Elapsed time

    I am working on a DW project. We are using UDB 7.1 on AIX 5.1.

    I am a bit confused about the relationship between the optimizer's total cost and the elapsed time of our report queries. My assumption was that elapsed time is roughly proportional to the total cost, but for many of the queries we are tuning we observe the opposite: queries with a lower estimated cost sometimes take longer to run. As a result, we are unable to rely on the cost.

    Can someone clarify?

    Also, while tuning queries I can analyze to some extent how the original query is rewritten by the optimizer, but I am unable to figure out how the rewritten query is transformed into the steps shown in the access plan. For example, even a small change to the original query almost always changes the rewritten query, yet the access plan does not change nearly as often.

    Thanks in advance.
    You can expect a lot of queries from me in the near future...


  2. #2
    Join Date
    Aug 2001

    Re: Total cost v/s Elapsed time

    Unfortunately, timerons are not proportional to the elapsed time of the query.

    Definition of timeron from IBM Manuals -

    A timeron is a unit of measurement used to give a rough relative estimate of the resources, or cost, required by the database server to execute two plans for the same query. The resources calculated in the estimate include weighted CPU and I/O costs.


    A timeron is an abstract unit of measure. It does not directly equate to any actual elapsed time, but gives a rough relative estimate of the resources (cost) required by the database manager to execute an access plan.
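One way to see the timeron figure the optimizer assigned to a plan is to capture the access plan with the explain facility and format it with db2exfmt. A rough sketch, with placeholders for the database name (SAMPLE) and the query; it assumes the explain tables already exist (they can be created from sqllib/misc/EXPLAIN.DDL), and exact tool options may vary between DB2 versions:

```shell
# Capture the access plan (and its timeron cost) without executing the query.
db2 connect to sample
db2 set current explain mode explain
db2 "SELECT ... your report query ..."   # plan is captured, statement not run
db2 set current explain mode no

# Format the most recently explained statement; the report shows
# "Total Cost" in timerons at the top of the plan.
db2exfmt -d sample -1 -o plan.txt
```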

    As for your next question, on the relationship between query changes and access plan changes: I think the definition of the timeron above explains it. The optimizer costs the candidate plans in timerons, and several different rewritten forms of a query can end up being costed to the same cheapest access plan.

    Hope you share your experiences with other users ..


  3. #3
    Join Date
    May 2003
    The DB2 optimizer tries to determine which access path will be the fastest (taking into account CPU time and I/O time, as Sathyaram mentioned above). It uses statistics (from the runstats utility) and heuristics to try to figure that out. Sometimes the statistics may be out of date (or incomplete), or the heuristics may not accurately reflect the actual data distribution of a particular table. If you are having optimization problems, it's a good idea to make sure you are capturing the maximum level of statistics possible with the runstats utility.
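As a sketch of that suggestion (the database, schema, and table names are placeholders), collecting distribution statistics and detailed index statistics gives the optimizer the most information for costing predicates on skewed data:

```shell
# Collect the richest statistics runstats offers for a table:
# WITH DISTRIBUTION adds frequency/quantile data-distribution statistics,
# and DETAILED INDEXES ALL adds extended statistics for every index.
db2 connect to sample
db2 "RUNSTATS ON TABLE myschema.fact_sales WITH DISTRIBUTION AND DETAILED INDEXES ALL"
```

Dynamic SQL picks up the new statistics on the next prepare; static packages need a rebind to benefit.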

    In addition, the optimizer assumes that the database is adequately tuned. If there are problems, such as inadequate temporary space, or contention with other SQL statements, then the estimated cost of the query may be wildly off-base.

    Also, there are several different levels of query optimization that DB2 can perform. As the level increases, the amount of time needed to optimize also increases (often unacceptable in transaction environments with dynamic SQL), but in some cases this can yield dramatic improvements for decision support queries. For more about query optimization levels, see this thread:
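To experiment with this, the optimization class can be raised for a session through the CURRENT QUERY OPTIMIZATION special register. A sketch, with the database name and query as placeholders; 5 is the usual default class, while 7 and 9 make the optimizer spend more time searching for a better plan:

```shell
# Raise the optimization class for this session only, then run the
# decision-support query under the higher class.
db2 connect to sample
db2 "SET CURRENT QUERY OPTIMIZATION 9"
db2 "SELECT ... complex DSS query ..."
```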
