Results 1 to 8 of 8
  1. #1
    Join Date
    Mar 2007
    Posts
    35

    Unanswered: limitation of table size in 4k page size tablespace?

    Hi All,
    I am working with db2 v8.2.6 and aix 5.3

    As you know, the maximum table size in a 4 KB page size tablespace is 64 GB. My question is:

    Is that 64 GB for the data portion and 64 GB for the index portion, or is it data part + index part = 64 GB?

    I am using SMS tablespace.

    Thank you in advance for your help
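    For reference, the 64 GB figure is just page arithmetic: a regular tablespace can address roughly 16 M pages per partition, and at 4 KB per page that works out to 64 GB. A quick sanity check (plain Python, just arithmetic; the 16 M page count is my understanding of the regular tablespace limit, not taken from this thread):

    ```python
    pagesize = 4 * 1024            # 4 KB page size
    max_pages = 16 * 1024 * 1024   # ~16 M addressable pages in a regular tablespace (per partition)
    limit_bytes = pagesize * max_pages

    print(limit_bytes // 2**30)    # → 64 (GB)
    ```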

  2. #2
    Join Date
    Nov 2005
    Location
    IL
    Posts
    557
    In SMS you store indexes in the same tablespace. So, 64 GB is a combined limit.
    --
    IBM Certified DBA on DB2 for Linux, UNIX, and Windows

    DB2 v9.7.0.6 os 6.1.0.0

  3. #3
    Join Date
    Mar 2007
    Posts
    35
    Quote Originally Posted by Cougar8000
    In SMS you store indexes in the same tablespace. So, 64 GB is a combined limit.

    But the data part is located in the xxx.DAT file and the index part is located in the xxxx.INX file.

    I already have a table with 51 GB in the data part and 16 GB in the index part, so the table is 67 GB in total, and the database is working fine. Are you sure your comment is correct? Please help.

  4. #4
    Join Date
    Nov 2005
    Location
    IL
    Posts
    557
    Yes, I am right. You do have a limit of 64 GB regardless of what data is stored in it, and it is per partition. Are you running multiple partitions or not?

    Check your calculation of the size.
    --
    IBM Certified DBA on DB2 for Linux, UNIX, and Windows

    DB2 v9.7.0.6 os 6.1.0.0

  5. #5
    Join Date
    Mar 2007
    Posts
    35
    Quote Originally Posted by Cougar8000
    Yes, I am right. You do have a limit of 64 GB regardless of what data is stored in it, and it is per partition. Are you running multiple partitions or not?

    Check your calculation of the size.
    I am running a single partition, and I use the following script for this purpose:

    Data size:
    select char(date(t.stats_time)) || ' ' || char(time(t.stats_time)) as statstime
    , substr(t.tabschema, 1, 4) || '.' || substr(t.tabname, 1, 24) as tabname
    , card as rows_per_table
    , decimal(float(t.npages) / (1024 / (b.pagesize / 1024)), 9, 2) as used_mb
    , decimal(float(t.fpages) / (1024 / (b.pagesize / 1024)), 9, 2) as allocated_mb
    from syscat.tables t
    , syscat.tablespaces b
    where t.tbspace = b.tbspace
    with ur;

    Index size:

    select rtrim(substr(i.tabschema, 1, 8)) || '.' || rtrim(substr(i.tabname, 1, 24)) as tabname
    , decimal(sum(i.nleaf) / (1024 / (b.pagesize / 1024)), 12, 2) as indx_used_pertable
    from syscat.indexes i
    , syscat.tables t
    , syscat.tablespaces b
    where i.tabschema is not null
    and i.tabname = t.tabname
    and i.tabschema = t.tabschema
    and t.tbspace = b.tbspace
    group by i.tabname, i.tabschema, b.pagesize
    with ur;
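    To relate the two result sets to the 64 GB question, you can sum the data and index figures per table and compare against the limit. A small sketch (plain Python, using the 51 GB / 16 GB figures from this thread as example inputs):

    ```python
    # Example per-table figures in MB, as reported earlier in this thread:
    data_mb  = 51 * 1024   # ~51 GB data portion (the .DAT file)
    index_mb = 16 * 1024   # ~16 GB index portion
    limit_mb = 64 * 1024   # 64 GB limit for a 4 KB page size

    combined_mb = data_mb + index_mb
    print(combined_mb, combined_mb > limit_mb)   # → 68608 True
    ```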

  6. #6
    Join Date
    Jan 2007
    Location
    Jena, Germany
    Posts
    2,721
    Quote Originally Posted by jsmit
    I am running single partition and I use the following script for this purpose

    Data size:
    select char(date(t.stats_time)) || ' ' || char(time(t.stats_time)) as statstime
    , substr(t.tabschema, 1, 4) || '.' || substr(t.tabname, 1, 24) as tabname
    , card as rows_per_table
    , decimal(float(t.npages) / (1024 / (b.pagesize / 1024)), 9, 2) as used_mb
    , decimal(float(t.fpages) / (1024 / (b.pagesize / 1024)), 9, 2) as allocated_mb
    Personally, I would change this formula to be easier to understand: take the number of pages, multiply it by the page size and then divide the result to get the number in MB:
    Code:
    t.fpages * b.pagesize / ( 1024 * 1024 )
    Then you can also stick with integer operations if that's sufficient.
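    To illustrate that the simplified formula gives the same result as the one in the script, a quick check in plain Python (the page count is a made-up example):

    ```python
    pagesize = 4096   # 4 KB pages
    fpages = 13000    # arbitrary example page count

    # formula as written in the script:
    original = fpages / (1024 / (pagesize / 1024))
    # simplified version: pages * page size, converted to MB
    simplified = fpages * pagesize / (1024 * 1024)

    print(original, simplified)   # → 50.78125 50.78125
    ```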

    Have you run RUNSTATS (on table and index) before your calculations to make sure the catalog statistics you are using are correct?

    select rtrim(substr(i.tabschema, 1, 8)) || '.' || rtrim(substr(i.tabname, 1, 24)) as tabname
    , decimal(sum(i.nleaf) / (1024 / (b.pagesize / 1024)), 12, 2) as indx_used_pertable
    You ignore inner pages of the B-Trees. While this information is not available from the catalog, you have the number of tree levels and can use that in combination with NLEAF to estimate the number of inner pages.
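    One way to do that estimate, sketched in plain Python: derive a uniform fanout from NLEAF and NLEVELS (both available in SYSCAT.INDEXES) and sum the levels above the leaves. The helper name and the uniform-fanout assumption are mine, not from DB2:

    ```python
    import math

    def estimate_inner_pages(nleaf, nlevels):
        """Rough count of non-leaf B-tree pages from NLEAF and NLEVELS.

        Assumes every inner page has the same fanout and the tree is
        fully populated -- a sketch, not an exact DB2 formula.
        """
        if nlevels <= 1:
            return 0
        # nleaf pages after (nlevels - 1) fanout steps from the root
        # implies a uniform fanout of roughly nleaf ** (1 / (nlevels - 1)).
        fanout = nleaf ** (1.0 / (nlevels - 1))
        inner, level = 0, nleaf
        for _ in range(nlevels - 1):
            level = max(1, math.ceil(level / fanout))
            inner += level
        return inner

    # e.g. 10000 leaf pages in a 3-level index -> fanout ~100,
    # so ~100 intermediate pages plus 1 root page:
    print(estimate_inner_pages(10000, 3))   # → 101
    ```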

    where i.tabschema is not null
    That's a pointless predicate. Because a table always has a schema, the value i.tabschema can never be NULL.
    Knut Stolze
    IBM DB2 Analytics Accelerator
    IBM Germany Research & Development

  7. #7
    Join Date
    Mar 2007
    Posts
    35
    Quote Originally Posted by stolze
    Personally, I would change this formula to be easier to understand: take the number of pages, multiply it by the page size and then divide the result to get the number in MB:
    Code:
    t.fpages * b.pagesize / ( 1024 * 1024 )
    Then you can also stick with integer operations if that's sufficient.

    Have you run RUNSTATS (on table and index) before your calculations to make sure the catalog statistics you are using are correct?



    You ignore inner pages of the B-Trees. While this information is not available from the catalog, you have the number of tree levels and can use that in combination with NLEAF to estimate the number of inner pages.



    That's a pointless predicate. Because a table always has a schema, the value i.tabschema can never be NULL.

    Thank you, Stolze,

    Could you please send me your complete code?

    Thank you again

  8. #8
    Join Date
    Jan 2007
    Location
    Jena, Germany
    Posts
    2,721
    Which "completed code" are you referring to? I was commenting on your code only.
    Last edited by stolze; 06-08-08 at 18:28.
    Knut Stolze
    IBM DB2 Analytics Accelerator
    IBM Germany Research & Development
