Hi all. I've just set up a 300 GB RAID 10 array that I want to use for an Informix database with several HUGE tables that are frequently joined, and I'm wondering what my best chunk strategy would be. Should I split the 300 GB into smaller chunks, or, since I'm running 9.40, is there any value in creating one huge chunk?
Multiple smaller chunks, because IDS internally still behaves as if one chunk were one disk: it schedules I/O per chunk. So to get parallel I/O, create multiple chunks.
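As a rough sketch of how that might look with `onspaces` (the device paths, dbspace names, and 2 GB chunk size here are just example values, not a recommendation for your box):

```shell
# Create a dbspace with an initial 2 GB chunk
# (-c -d = create dbspace, -p = path, -o = offset in KB, -s = size in KB)
onspaces -c -d datadbs1 -p /dev/informix/chunk1 -o 0 -s 2097152

# Add a second 2 GB chunk to the same dbspace (-a = add chunk)
onspaces -a datadbs1 -p /dev/informix/chunk2 -o 0 -s 2097152
```

Repeat for as many chunks and dbspaces as your layout calls for.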
If the tables are really big, I would also recommend multiple dbspaces with fragmented tables. That way IDS can get even more parallelism, and possibly fragment elimination as well. Of course, it depends on the chosen fragmentation strategy and how the data is used.
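For illustration, a table fragmented by expression across two dbspaces might look like this via `dbaccess` (the database, table, columns, and date boundary are all made up for the example; `datadbs1`/`datadbs2` are assumed to exist already):

```shell
dbaccess mydb <<'EOF'
-- Fragment by a date range so the optimizer can skip whole fragments
-- (fragment elimination) when a query filters on order_date.
CREATE TABLE orders (
    order_id   INTEGER,
    order_date DATE,
    amount     DECIMAL(12,2)
)
FRAGMENT BY EXPRESSION
    order_date <  MDY(1,1,2003) IN datadbs1,
    order_date >= MDY(1,1,2003) IN datadbs2;
EOF
```

With that layout, a query restricted to 2003 and later only has to scan the fragment in datadbs2.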
Normally 2 to 4 chunks per physical disk is optimal, even with RAID 10.
But as I said, this is just my opinion, and a lot of people have their own ideas about this.