  1. #1
    Join Date
    Jan 2003
    Location
    Arizona
    Posts
    7

    Unanswered: Can bufferpool be too large?

    I am running DB2 v7.2 on AIX. It is an OLTP system. We have a batch job that copies the production database and does a restore/roll forward to create a report database. The problem is that when the roll-forward step of the job is running, other applications lose their connection to the server (ATM machines, etc.). The bufferpool size is 256,000 pages (4 KB each) as shown in syscat.bufferpools, and I'm wondering if this is depriving other applications of necessary resources. The bufferpool hit ratio is always > 99%. Is it possible that the BP can be too big? And what about other memory allocations, such as RESTBUFSZ, UTIL_HEAP_SZ, and APPLHEAPSZ? Does anyone think changing these parameters might help?
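
    For reference, here is roughly how I am checking those numbers (just a sketch; MYDB stands in for the actual database name, and the hit-ratio figures assume the BUFFERPOOL monitor switch is on):

    db2 "SELECT bpname, npages, pagesize FROM syscat.bufferpools"
    db2 update monitor switches using bufferpool on
    db2 get snapshot for bufferpools on MYDB
    # hit ratio = (1 - (data + index physical reads) / (data + index logical reads)) * 100, from the snapshot counters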

    The AIX server has 6 GB of RAM and 6 CPUs. The database is about 23 GB and the db cfg parameters are:

    Database heap (4KB) (DBHEAP) = 9216
    Catalog cache size (4KB) (CATALOGCACHE_SZ) = 256
    Log buffer size (4KB) (LOGBUFSZ) = 16
    Utilities heap size (4KB) (UTIL_HEAP_SZ) = 5000
    Buffer pool size (pages) (BUFFPAGE) = 1000
    Extended storage segments size (4KB) (ESTORE_SEG_SZ) = 65536
    Number of extended storage segments (NUM_ESTORE_SEGS) = 0
    Max storage for lock list (4KB) (LOCKLIST) = 200

    Max appl. control heap size (4KB) (APP_CTL_HEAP_SZ) = 192

    Sort list heap (4KB) (SORTHEAP) = 256
    SQL statement heap (4KB) (STMTHEAP) = 2048
    Default application heap (4KB) (APPLHEAPSZ) = 15000
    Package cache size (4KB) (PCKCACHESZ) = 1024
    Statistics heap size (4KB) (STAT_HEAP_SZ) = 10664

    Interval for checking deadlock (ms) (DLCHKTIME) = 10000
    Percent. of lock lists per application (MAXLOCKS) = 10
    Lock timeout (sec) (LOCKTIMEOUT) = -1

    Changed pages threshold (CHNGPGS_THRESH) = 60
    Number of asynchronous page cleaners (NUM_IOCLEANERS) = 12
    Number of I/O servers (NUM_IOSERVERS) = 36
    Index sort flag (INDEXSORT) = YES
    Sequential detect flag (SEQDETECT) = YES
    Default prefetch size (pages) (DFT_PREFETCH_SZ) = 764

    Track modified pages (TRACKMOD) = OFF

    Default number of containers = 1
    Default tablespace extentsize (pages) (DFT_EXTENT_SZ) = 256

    Max number of active applications (MAXAPPLS) = 80
    Average number of active applications (AVG_APPLS) = 3
    Max DB files open per application (MAXFILOP) = 64

    Log file size (4KB) (LOGFILSIZ) = 62500
    Number of primary log files (LOGPRIMARY) = 8
    Number of secondary log files (LOGSECOND) = 4


    Thanks for your help.

  2. #2
    Join Date
    Aug 2001
    Location
    UK
    Posts
    4,650

    Re: Can bufferpool be too large?

    While a restore or rollforward is in progress at the database level, no applications can be connected to that database (they can be if the operation is at the tablespace level) ...
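
    For example, a tablespace-level restore and rollforward would look roughly like this (only a sketch to illustrate the tablespace-level case; MYDB, TS_DATA, /backups and the TAKEN AT timestamp are placeholders for your own names, image location and backup time):

    db2 "RESTORE DATABASE MYDB TABLESPACE (TS_DATA) ONLINE FROM /backups TAKEN AT 20030101120000"
    db2 "ROLLFORWARD DATABASE MYDB TO END OF LOGS AND STOP TABLESPACE (TS_DATA) ONLINE"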



    A 256,000-page bufferpool of 4 KB pages (roughly 1 GB) doesn't sound too large for a 23 GB database ....


    HTH

    Cheers

    Sathyaram

    Originally posted by dbscott
    I am running DB2 v7.2 on AIX. It is an OLTP system. We have a batch job that copies the production database and does a restore/roll forward to create a report database. The problem is that when the roll-forward step of the job is running, other applications lose their connection to the server (ATM machines, etc.). The bufferpool size is 256,000 pages (4 KB each) as shown in syscat.bufferpools, and I'm wondering if this is depriving other applications of necessary resources. The bufferpool hit ratio is always > 99%. Is it possible that the BP can be too big? And what about other memory allocations, such as RESTBUFSZ, UTIL_HEAP_SZ, and APPLHEAPSZ? Does anyone think changing these parameters might help?


  3. #3
    Join Date
    Nov 2002
    Location
    Delaware
    Posts
    186
    You need to take into account all of your database global shared memory parameters. DB2 has a limit of 1.6 GB, but with overhead it's effectively around 1.4 GB. You need to make sure you don't over-allocate that memory; otherwise it will cause problems.
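
    As a rough back-of-the-envelope check against the numbers posted above (just a sketch; the exact list of components and the formula are in the Administration Guide):

    bufferpool     256,000 x 4 KB ~ 1,000 MB
    dbheap           9,216 x 4 KB ~    36 MB
    util_heap_sz     5,000 x 4 KB ~    20 MB
    pckcachesz       1,024 x 4 KB ~     4 MB
    locklist           200 x 4 KB ~     1 MB

    That's roughly 1,060 MB of database global memory - under the ~1.4 GB effective limit, but not leaving a lot of headroom for the rest.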

  4. #4
    Join Date
    Nov 2003
    Location
    Barcelona (SPAIN)
    Posts
    4

    Question

    Originally posted by quigleyd
    You need to take into account all of your database global shared memory parameters. DB2 has a limit of 1.6 GB, but with overhead it's effectively around 1.4 GB. You need to make sure you don't over-allocate that memory; otherwise it will cause problems.
    Hello,
    which DB2 parameters do I need to take into account to calculate my database global shared memory?

    Kind regards.

    Oscar

  5. #5
    Join Date
    May 2003
    Location
    USA
    Posts
    5,737
    I would look at the DB2 Performance Wizard. You can tell the wizard what percent of total memory you want to give to DB2, and it will allocate the memory among DB2 components based on questions it asks you about your application.

    I would give DB2 about 1/2 to 3/4 of total real memory that the server has available, depending on what other functions the server is doing.

  6. #6
    Join Date
    Jun 2003
    Location
    Toronto, Canada
    Posts
    5,516
    Provided Answers: 1

    Re: Can bufferpool be too large?

    Originally posted by dbscott
    I am running DB2 v7.2 on AIX. It is an OLTP system. We have a batch job that copies the production database and does a restore/roll forward to create a report database. The problem is that when the roll-forward step of the job is running, other applications lose their connection to the server (ATM machines, etc.). The bufferpool size is 256,000 pages (4 KB each) as shown in syscat.bufferpools, and I'm wondering if this is depriving other applications of necessary resources.
    I don't believe the bufferpool size has much to do with your problem. I would start by collecting OS statistics during the restore and rollforward.

    Do you restore on the same physical machine? When you say "applications lose their connections", do you mean database connections or sockets?

    Maybe there's excessive CPU utilization during the roll-forward and the connections time out?
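
    Something along these lines during the roll-forward window would be a start (just a sketch; the 5-second interval and the output file names are arbitrary):

    # CPU, memory and paging activity while the roll-forward runs
    vmstat 5 > /tmp/vmstat.rollforward.out &
    # per-disk I/O utilization at the same interval
    iostat 5 > /tmp/iostat.rollforward.out &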
    ---
    "It does not work" is not a valid problem statement.

  7. #7
    Join Date
    Mar 2003
    Posts
    343
    A restore at the database level is an operation that requires an exclusive lock on the database - the database is inaccessible while this operation is in progress. When a database is in a rollforward-pending state, the database is again inaccessible. Read up on recovery in the admin guides and I am sure this will be stated. The inaccessibility of the database has nothing to do with memory allocation.

    If the bufferpool is too large, you will not be able to bring up, i.e. activate, the database. It will start with a default bufferpool of a few pages (the number 16 comes to mind, but I'm not sure) but will not allow connections - it will just let you reduce the memory allocation.

    If this is an EE 32-bit system, then you should be able to allocate 1.6 GB of memory. If this is EEE and the system has the registry variable db2_force_fcm_bp set to on, then you can allocate ~1.4 GB. This includes the database manager shared memory, the database global memory, and the application global memory. The Administration Guide: Performance covers this very well under Chapter 8: Operational Performance.
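
    A quick way to see where you stand (a sketch; REPORTDB stands in for your report database name):

    # list the DB2 registry variables, including db2_force_fcm_bp if it is set
    db2set -all
    # check whether the database is still in a rollforward-pending or rollforward-in-progress state
    db2 rollforward db REPORTDB query status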
