  1. #1
    Join Date
    Nov 2010
    Posts
    6

    Unanswered: DB2 database limits

This is a curiosity question more than a practical one - I have little experience with DB2.

As I understand it, DB2 has a database and table limit of 512 TB. Given that it's now possible for larger data storage systems to contain several petabytes, is it possible for a DB2 table to go past the 512 TB limit? Does partitioning help?

    What other limits would start to apply when you get to that level (specifically with DB2)?

  2. #2
    Join Date
    May 2003
    Location
    USA
    Posts
    5,737
    You didn't say which version and operating system of DB2 you are using. For DB2 LUW the limits are documented in Appendix A of the SQL Reference Vol 1.

    For DB2 LUW, the maximum size of a table per database partition (DPF, not Table Partitioning) in a large DMS table space (in gigabytes) is 16,384 if you are using a 32K page size.
    Last edited by Marcus_A; 01-24-11 at 09:38.
    M. A. Feldman
    IBM Certified DBA on DB2 for Linux, UNIX, and Windows
    IBM Certified DBA on DB2 for z/OS and OS/390

  3. #3
    Join Date
    Nov 2010
    Posts
    6
    Quote Originally Posted by Marcus_A View Post
    You didn't say which version and operating system of DB2 you are using.
    Any operating system. I assume all LUW versions enforce the 512 TB (or lower) limit? Does z/OS have different limits?

    Quote Originally Posted by Marcus_A View Post
    For DB2 LUW the limits are documented in Appendix A of the SQL Reference Vol 1.

    For DB2 LUW, the maximum size of a table per database partition (DPF, not Table Partitioning) in a large DMS table space (in gigabytes) is 16,384 if you are using a 32K page size.
    OK - so a partition can only be 16 TB, but how would you, say, create an 800 TB table (with 8,000 partitions)?

  4. #4
    Join Date
    May 2003
    Location
    USA
    Posts
    5,737
    Quote Originally Posted by chancey View Post
    Any operating system. I assume all LUW versions enforce the 512 TB (or lower) limit? Does z/OS have different limits?
    I don't know anything about a 512 TB limit for a LUW database. I explained the limits per table above, and you can multiply the maximum number of tables by the per-table limit to find the database limit.

    You would have to check the SQL Reference Appendix for z/OS to see their limits.


    Quote Originally Posted by chancey View Post
    OK - so a partition can only be 16 TB, but how would you, say, create an 800 TB table (with 8,000 partitions)?
    The DPF partitions can be numbered from 0-999, so that would be a max of about 16 petabytes per table.
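
    The arithmetic behind that figure is a quick back-of-the-envelope check (binary units, using the per-partition limit in gigabytes quoted from the SQL Reference):

    ```python
    # Per-partition table size limit from the SQL Reference (in GB, 32K page size),
    # multiplied out across the maximum number of DPF partitions.
    gb_per_partition = 16_384      # 16 TB table limit per database partition
    dpf_partitions = 1_000         # DPF partitions numbered 0-999

    total_gb = gb_per_partition * dpf_partitions
    total_pb = total_gb / 1024 / 1024
    print(total_pb)                # 15.625, i.e. roughly 16 PB per table
    ```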
    M. A. Feldman
    IBM Certified DBA on DB2 for Linux, UNIX, and Windows
    IBM Certified DBA on DB2 for z/OS and OS/390

  5. #5
    Join Date
    Nov 2010
    Posts
    6
    OK, I think Wikipedia might be telling fibs. The IBM website for DB2 9.7 LUW lists the limits as:

    Maximum number of rows in a non-partitioned table, per database partition: 1.28 trillion
    Maximum number of rows in a data partition, per database partition: 1.28 trillion
    Maximum number of table spaces in a database: 32,768
    Maximum size of a table per database partition in a regular table space: 512 GB
    Maximum size of a table per database partition in a large DMS table space: 64 TB
    Maximum number of data partitions for a single table: 32,767

    So in practical terms:
    Maximum rows per table: Unlimited (1.28 trillion x 32,767 table spaces)
    Maximum table size: 2 EB (64 TB DMS table space x 32,767 table spaces)
    Maximum database size: 2 EB (64 TB DMS table space x 32,767 table spaces)

    Am I interpreting these numbers correctly?
    Last edited by chancey; 01-24-11 at 10:46.

  6. #6
    Join Date
    May 2003
    Location
    USA
    Posts
    5,737
    I was looking at the original published version of the 9.7 SQL Reference Vol 1 PDF manual, which was obviously in error. The new version of the same manual (Sept 2010) and the InfoCenter website you quoted from have the correct values.

    • Maximum size of a table with a 32K page size in a large DMS table space is 64 TB per database partition (as you mentioned).
    • The maximum number of DPF partitions is 1000 (0-999).
    • Therefore, the max table size is 64 petabytes.

    As you also mentioned, there is also a limit of 1.28 trillion rows per table per database (DPF) partition. Since there are a max of 1000 DPF partitions, the max would be 1,280 trillion rows per table.

    So both the max table size (64 petabytes) and the max number of rows (1,280 trillion) need to be taken into account.
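
    Multiplying out those corrected limits (as a rough sketch, using the Sept 2010 figures quoted above):

    ```python
    # Corrected per-partition limits (Sept 2010 SQL Reference / InfoCenter),
    # scaled across the maximum of 1000 DPF partitions.
    tb_per_partition = 64            # TB per database partition, 32K page, large DMS
    rows_per_partition = 1.28e12     # 1.28 trillion rows per database partition
    dpf_partitions = 1_000           # partitions 0-999

    max_table_tb = tb_per_partition * dpf_partitions
    max_table_rows = rows_per_partition * dpf_partitions
    print(max_table_tb)    # 64000 TB, i.e. 64 PB
    print(max_table_rows)  # 1.28e+15, i.e. 1,280 trillion rows
    ```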

    I don't think your conclusions based on max table spaces are relevant for max rows per table or max table size. I am not sure whether the max database size you calculated needs to take into account that up to 1000 DPF partitions can be created. But these numbers are getting ridiculously large, so I am getting dizzy thinking about it.
    Last edited by Marcus_A; 01-24-11 at 11:33.
    M. A. Feldman
    IBM Certified DBA on DB2 for Linux, UNIX, and Windows
    IBM Certified DBA on DB2 for z/OS and OS/390

  7. #7
    Join Date
    Jan 2010
    Posts
    335
    Quote Originally Posted by Marcus_A View Post
    • Maximum size of a table with a 32K page size in a large DMS table space is 64 TB per database partition (as you mentioned).
    • The maximum number of DPF partitions is 1000 (0-999).
    • Therefore, the max table size is 64 petabytes.
    You can stretch it a little bit more, if you combine DPF with table partitioning.
    I won't do the math here.

    Cheers
    nvk

  8. #8
    Join Date
    May 2003
    Location
    USA
    Posts
    5,737
    Quote Originally Posted by nvk@vhv View Post
    You can stretch it a little bit more, if you combine DPF with table partitioning.
    I won't do the math here.

    Cheers
    nvk
    I am not sure that for DB2 LUW the size limits are extended by Table Partitioning (unlike DPF, which does extend the limits).

    Note that in DB2 z/OS each table partition is a separate tablespace, but not so in DB2 LUW.
    M. A. Feldman
    IBM Certified DBA on DB2 for Linux, UNIX, and Windows
    IBM Certified DBA on DB2 for z/OS and OS/390

  9. #9
    Join Date
    Nov 2010
    Posts
    6
    Whether it's 64 PB or larger, the point is that the limit is far higher than any hardware that currently exists.

    I only brought it up because one of the recent IBM superclusters has 821TB of storage (http://www.tpc.org/results/individua...0100816_ES.pdf)

  10. #10
    Join Date
    May 2003
    Location
    USA
    Posts
    5,737
    Quote Originally Posted by chancey View Post
    Whether it's 64 PB or larger, the point is that the limit is far higher than any hardware that currently exists.

    I only brought it up because one of the recent IBM superclusters has 821TB of storage (http://www.tpc.org/results/individua...0100816_ES.pdf)
    That particular configuration uses DPF (the database partitioning feature), also known as DB2 InfoSphere Warehouse Edition. The config uses 96 DPF partitions spread across 3 physical servers (32 partitions per server), with two CPU cores per partition. This kind of config allows for parallel processing of large data warehouses, although this particular config was used for an OLTP benchmark.

    DPF is scalable up to 1000 partitions (0-999) so the amount of data that DB2 can handle in one database is extremely large. Since DPF is scalable across multiple physical servers (acting as one database server to the application), it has extremely high linear scalability (as more servers are added) for complex data warehouse workloads.

    It would also be possible to have an application which uses multiple databases at one time, with two-phase commit across both databases if necessary.
    M. A. Feldman
    IBM Certified DBA on DB2 for Linux, UNIX, and Windows
    IBM Certified DBA on DB2 for z/OS and OS/390

  11. #11
    Join Date
    Nov 2010
    Posts
    6
    Quote Originally Posted by Marcus_A View Post
    The config uses 96 DPF partitions spread across 3 physical servers (32 partitions per server), with two CPU cores per partition. This kind of config allows for parallel processing of large data warehouses, although this particular config was used for an OLTP benchmark.
    What happens if 90% of the transaction workload occurs in just one of the partitions - does this mean two CPU cores will be flat out trying to cope while all the rest sit around doing nothing? Is this a limitation of DPF partitions?

  12. #12
    Join Date
    May 2003
    Location
    USA
    Posts
    5,737
    Quote Originally Posted by chancey View Post
    What happens if 90% of the transaction workload occurs in just one of the partitions - does this mean two CPU cores will be flat out trying to cope while all the rest sit around doing nothing? Is this a limitation of DPF partitions?
    The data in a DPF environment is partitioned randomly across the partitions, and is designed to enable DB2 to access the data in parallel for a single SQL query (which works well for complex data warehouse queries where a large number of rows need to be processed). This is different from Table Partitioning, where the data is partitioned by some range of values that may be skewed (and usually is skewed).
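
    A toy sketch of the idea (not DB2's actual hashing algorithm - just an illustration of why hash distribution avoids the skew that range partitioning can suffer):

    ```python
    import hashlib
    from collections import Counter

    NUM_PARTITIONS = 96  # matches the benchmark config discussed above

    def partition_for(key: str) -> int:
        # Hash the distribution key and map it onto a partition number.
        # (Illustrative only; DB2 maps hash values through its own partitioning map.)
        digest = hashlib.md5(key.encode()).digest()
        return int.from_bytes(digest[:8], "big") % NUM_PARTITIONS

    # Even 100,000 perfectly sequential keys land almost evenly across partitions:
    counts = Counter(partition_for(f"customer-{i}") for i in range(100_000))
    print(min(counts.values()), max(counts.values()))  # both close to 100000/96 ≈ 1042
    ```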

    DPF is not usually suitable for OLTP, but IBM was able to make it work for the TPC-C benchmark, for reasons that require a complex explanation I don't have time to go into here.
    M. A. Feldman
    IBM Certified DBA on DB2 for Linux, UNIX, and Windows
    IBM Certified DBA on DB2 for z/OS and OS/390

  13. #13
    Join Date
    Aug 2008
    Location
    Toronto, Canada
    Posts
    2,367
    Some info from "Large Tablespaces the default in DB2 9":


    "... To calculate the largest single table you can now have in DB2, first take the largest page size (32k) which supports a single table partition of 16TB in size. Then multiply by the number of table partitions that you can have which is 32,000, and then multiply by the number of database partitions you can have in a DPF cluster (1000). That puts the largest DB2 table that you can have in DB2 9 at half a zettabyte. Yup that's a real word which represents one sextillion bytes (ya that's a real word as well). That's 512,000 Petabytes for just a single table.... "
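
The quoted arithmetic checks out (decimal units, which the article appears to use):

```python
# Multiplying out the three limits named in the quote.
tb_per_table_partition = 16   # 16 TB per table partition at a 32K page size
table_partitions = 32_000     # table partitions per table
db_partitions = 1_000         # DPF database partitions in a cluster

total_tb = tb_per_table_partition * table_partitions * db_partitions
print(total_tb)               # 512,000,000 TB
print(total_tb // 1_000)      # 512,000 PB - i.e. half a zettabyte
```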
