Performance issue when adding disks !!!
I am using DB2 version 8 FixPak 7 on Solaris (SunOS 5.8). My table is very big: 400 GB.
I am going to add a 1000 GB hard disk to the system. What will the effects on performance and CPU utilization be?
Because this is a very general question, if anyone has materials or a website relating to it, please point me to them.
As you have indicated, more details are needed to give you more specific "gotchas" or "tips"....
Here are a few generic questions -
What is the objective behind adding this new disk - to provide additional space for the large table (and others) to grow?
What kind of tablespace(s) is that large table contained in - SMS or DMS?
To what are you adding that additional disk -
- to an existing/new filesystem
- to an existing / new volume
- to an existing / new volume group
In short, how will the additional disk affect the filesystems and containers for the tablespace(s) that contains this large table?
Is this new storage going to affect any other tables/tablespaces?
If the large table is in an SMS tablespace, then the only way to allocate more storage to the tablespace is to add the new disk to the same volume (and volume group) as the existing filesystem AND to extend the filesystem. I am hoping that you are using some kind of volume manager on your system.
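For the SMS case, the filesystem growth might look something like the sketch below, assuming Veritas Volume Manager with VxFS on Solaris; the disk, disk group, and volume names are placeholders, so check the real names with your sysadmin first.

```shell
# Confirm the OS and VxVM can see the new disk.
vxdisk list

# Add the new disk to the disk group that backs the SMS filesystem
# (disk group "datadg" and disk "c2t1d0" are made-up names).
vxdg -g datadg adddisk disk05=c2t1d0

# Grow the volume; vxresize also grows the VxFS filesystem on top of it.
vxresize -g datadg datavol +1000g
```

Once the filesystem is larger, the SMS tablespace simply uses the extra space as the table grows; no DB2 command is needed.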
If the large table is in a DMS tablespace, then you have the option of adding the new storage either to the existing filesystems that hold the large table's containers (more precisely, the containers of the tablespace containing the large table) OR of creating a new filesystem and then adding new containers. If not done judiciously, you may trigger a rebalance, which can take a very long time. If done right, you can either extend the existing containers or add a new one without triggering the dreaded rebalance.
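A sketch of the two DMS options, assuming a tablespace named BIGDATA with file containers; the name, path, and sizes are placeholders:

```shell
# Option 1: grow the existing containers in place. Extending
# containers (as opposed to adding new ones) does not trigger
# a rebalance.
db2 "ALTER TABLESPACE BIGDATA EXTEND (ALL CONTAINERS 10 G)"

# Option 2: put a new container on the new disk. A plain ADD
# would start a rebalance; placing the container in a new stripe
# set avoids that, at the cost of the new space not being striped
# with the existing containers.
db2 "ALTER TABLESPACE BIGDATA BEGIN NEW STRIPE SET (FILE '/newfs/bigdata_cont1' 100 G)"
```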
In short, you have to do significant planning and work hand-in-hand with your sysadmin.
The easiest approach, and the one likely to have the least overall effect, is to expand the filesystem(s) holding all the containers of the tablespace for the large table.
Once you have done that, you will have to issue an ALTER TABLESPACE ... EXTEND for all containers.
Again, you can do this in one sitting or in several sittings. As soon as you do it, DB2 will go out and claim the space for each of the containers. This will have an effect on CPU and I/O, but very little on memory. If you are doing this on a live/production system, I would suggest increasing all containers by 1 GB first and monitoring. You can then move to larger increments based on your experience.
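The incremental approach described above might look like this; the tablespace name BIGDATA and the database name MYDB are illustrative:

```shell
# First pass: grow every container by just 1 GB and watch the impact
# on CPU and I/O before committing to larger increments.
db2 "ALTER TABLESPACE BIGDATA EXTEND (ALL CONTAINERS 1 G)"

# Monitor tablespace state and I/O activity while DB2 claims the space.
db2 "GET SNAPSHOT FOR TABLESPACES ON MYDB"
```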
We too have some large databases - but we created our tablespaces using automatic storage, and so we do not have these kinds of issues (although the problem we have encountered recently is that you cannot shrink or resize containers under automatic storage).
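For comparison, with automatic storage (available from DB2 9) the database manager creates and grows the containers itself across the storage paths you give it; the database name and paths below are made up:

```shell
# Create a database whose storage is managed automatically across
# two filesystems (placeholder paths).
db2 "CREATE DATABASE MYDB AUTOMATIC STORAGE YES ON /db2fs1, /db2fs2"

# Tablespaces in such a database need no explicit containers.
db2 "CONNECT TO MYDB"
db2 "CREATE TABLESPACE BIGDATA MANAGED BY AUTOMATIC STORAGE"
```

Adding a new disk then comes down to growing the underlying filesystems (or adding a storage path), with no container-level EXTEND commands.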
I do not see a direct connection between memory or CPU usage and file sizes. I hope you are using range partitioning (new in DB2 9, "Viper") or a UNION ALL view over the big table. You can mount the filesystem with concurrent I/O (CIO) to reduce file caching. Tuning the file caching percentage and the maximum read-ahead (AIX terms), and spreading the filesystem across LUNs (if you are using a storage system) and across controllers, all help I/O.
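Since this reply uses AIX terms, here is what the CIO suggestion might look like on AIX with JFS2; the mount point is a placeholder, and CIO should only be used for filesystems holding DMS file containers, not the DB2 instance or log directories:

```shell
# Remount the container filesystem with concurrent I/O to bypass
# filesystem caching (DB2 buffers the pages itself).
umount /db2data
mount -o cio /db2data

# To make the option permanent, add "options = cio" to the stanza
# for /db2data in /etc/filesystems.
```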