  1. #1
    Join Date
    Aug 2004
    Posts
    138

    Unanswered: Unread prefetch pages

    Platform : AIX 5.2
    Version: UDB v8 fp6a

    I have noticed in the snapshot that the Unread prefetch pages counter is quite high. Can anyone guide me on how to reduce this? Our tablespace has only one container, with extent size 32 and prefetch size set to 256. The snapshot also shows significant prefetch wait time. We have num_ioservers set to 100.

    Is it good to use extended storage when there is a lot of memory in the system (16GB) but the bufferpool size is limited to 1.75GB in a 32-bit instance?

  2. #2
    Join Date
    May 2003
    Location
    USA
    Posts
    5,737
    The prefetch size should normally be equal to the number of containers times the extent size, so I would cut the prefetch size to 32. If you have an OLTP system, many people like to disable prefetch, since it primarily helps only with tablespace scans and can hurt transaction performance.
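    With your single container and extent size 32, something like this should do it (MYDB and TS_DATA are placeholders for your own database and tablespace names):

      db2 connect to MYDB
      db2 "ALTER TABLESPACE TS_DATA PREFETCHSIZE 32"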

    If you have a decision support system, create multiple containers per tablespace and put each container on a separate disk drive (or separate array if using RAID).

    Another way to minimize unwanted prefetches is to use the OPTIMIZE FOR n ROWS clause in SQL statements that define cursors returning multiple rows where you will only look at (or fetch) some of them.
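    For example (the table and column names here are made up), a cursor that will only ever fetch the first handful of rows can say so, and the optimizer will favor a plan that returns the first rows quickly rather than prefetching the whole answer set:

      SELECT order_id, order_status
      FROM orders
      WHERE customer_id = 1234
      ORDER BY order_date DESC
      OPTIMIZE FOR 10 ROWS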
    M. A. Feldman
    IBM Certified DBA on DB2 for Linux, UNIX, and Windows
    IBM Certified DBA on DB2 for z/OS and OS/390

  3. #3
    Join Date
    Mar 2003
    Posts
    343
    We asked the question on extended storage and were told that the overhead of managing extended storage was much higher than that of managing bufferpool pages. So, it was not recommended.

    Once you reduce prefetching as recommended by Marcus_A, you should find your bp hit ratios improve. I would take snapshots for a few weeks before and calculate the ratios and then confirm after to see whether there was indeed an improvement.
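    A minimal sketch of that check (MYDB is a placeholder, and the BUFFERPOOL monitor switch must be on for the counters to populate):

      db2 update monitor switches using bufferpool on
      db2 get snapshot for bufferpools on MYDB

    and then, from the logical/physical read counters in the output:

      hit ratio = (1 - (data physical reads + index physical reads)
                       / (data logical reads + index logical reads)) * 100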

  4. #4
    Join Date
    Aug 2004
    Posts
    138
    Thanks for the replies, guys. I will try them, and one more question: if extended storage hurts performance, how about using block-based I/O in bufferpools when there is a lot of sequential prefetching? Has anyone tried it?
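    From what I can tell, in v8 you define a block area per bufferpool and match the block size to the tablespace extent size; a rough sketch (BP_BLOCK and the sizes are just placeholders):

      db2 "CREATE BUFFERPOOL BP_BLOCK SIZE 100000 PAGESIZE 4K NUMBLOCKPAGES 48000 BLOCKSIZE 32"

    Sequential prefetch can then read a whole extent into one contiguous block of the pool instead of scattering the pages.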
    Last edited by chavadb2; 01-03-05 at 12:00.

  5. #5
    Join Date
    Aug 2004
    Posts
    138
    Quote Originally Posted by cchattoraj
    We asked the question on extended storage and were told that the overhead of managing extended storage was much higher than that of managing bufferpool pages. So, it was not recommended.

    Once you reduce prefetching as recommended by Marcus_A, you should find your bp hit ratios improve. I would take snapshots for a few weeks before and calculate the ratios and then confirm after to see whether there was indeed an improvement.
    I guess the overhead is because the extended storage option takes one 256MB segment away from database shared memory. So, unless we use extended storage greater than that, it's useless. My system has 32GB memory, so I went ahead and assigned 1GB for extended storage. I haven't faced any problems.
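    For the record, roughly what my 1GB setup looks like (MYDB and BP_MAIN are placeholders; 16 segments x 16384 4K pages = 1GB, and I believe the bufferpool has to be altered to actually use extended storage):

      db2 update db cfg for MYDB using NUM_ESTORE_SEGS 16 ESTORE_SEG_SZ 16384
      db2 "ALTER BUFFERPOOL BP_MAIN EXTENDED STORAGE"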

  6. #6
    Join Date
    Feb 2005
    Posts
    2

    Estore - Overhead

    We are using Estore on a 16-CPU AIX system with 128 Gig of memory.

    During a Siebel conversion we set estore to 42 Gig and had good success with reuse, for the most part.

    It did take the conversion time down a lot.

    The biggest overhead with using Estore is at least twofold:
    1 - the loss of a 256MB window segment in each PID using that UDB system,

    2 - the searching of the Estore pages for a HIT and then getting a Miss.
    ie - when a buffer pool has to age out a page, it will try to copy it to Estore, and that in turn pushes another page OUT of Estore (deletes the page).
    Keep an eye on your page movement To Estore and From Estore. The counters are in the tablespace snapshots, the buffer pool snapshots, and the database snapshots. You want more pages copied From Estore than copied To Estore; that means you are getting good reuse of the pages, ie good 'buffer hit ratios' in Estore. (I have not heard that phrase used, but that is what it really is.) I have heard ratios of 5:1 (From:To) described as good.

    In our conversion, we would see lots of pages go into Estore while a table scan was going on; then, once it finished, we would see 100% From Estore with 0 going To Estore for the next several intervals, while the following steps used the prior tablespace scan's data.

    Page movement is where the CPU time is used up. It takes CPU cycles to move a page to and from Estore, and a page cannot be read (ie, to look at a row) until it is copied from Estore back into the local buffer pool area.

    Any questions?

    If you collect snapshot data at 5 minute intervals, it will tell you tons of good information about your system. Tablespace, buffer pool, and database snapshots are low overhead and give lots of good information on how the system is doing.
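    A rough sketch of that kind of collection loop (MYDB is a placeholder; the grep picks out the 'copied to/from extended storage' counters):

      while true; do
        date
        db2 get snapshot for database on MYDB | grep -i "extended storage"
        sleep 300
      done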

    Henry

  7. #7
    Join Date
    Mar 2003
    Posts
    343
    As far as I know, if you have 1 container per tablespace and do not have the registry variable db2_parallel_io set to * (or to a list of tablespace IDs), parallel prefetch I/O is disabled.

    Also, num_ioservers seems to be quite a large number at 100. Usually it should be a few more than the number of physical devices on which the database resides (per the perf guide).
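    For reference, a sketch of both settings (MYDB and the value 12 are just illustrative - pick a few more than your disk count; the registry change takes effect after an instance restart):

      db2set DB2_PARALLEL_IO=*
      db2 update db cfg for MYDB using NUM_IOSERVERS 12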

    Have you made any changes besides estore? The overhead that I mentioned for estore was the pages moving between estore and the bp, very well described by h3steinhauer. If you have made changes, has prefetching reduced?

  8. #8
    Join Date
    Aug 2004
    Posts
    138
    That was an excellent post by Henry. I don't think setting num_ioservers to a large number (100 is very high) adds any considerable overhead. I am seeing improvements in the prefetching, and my ESTORE hit ratio is pretty good.

    I have the registry variable db2_parallel_io set to *, and I didn't make any other changes.
    Last edited by chavadb2; 07-28-05 at 12:25.
