We recently upgraded one of our systems from SCO 5.0.5 to 5.0.6, and in an attempt to improve performance we went from RAID level 1 to RAID level 10 and, at the same time, from 9 GB 10,000 rpm drives to 18 GB 15,000 rpm drives. In theory this should have made the system faster, but in reality we have a bottleneck: when performing any query, %wio climbs to 90-100% and the system sits at zero percent idle. We ran an UPDATE STATISTICS and this seemed to fix the problem. However, we schedule a system reboot every Monday night through cron, and Tuesday morning after the reboot we were right back to the same problem. Before we made these changes, we did not have the bottleneck issue.

The systems we are using are Dell PowerEdge 4400 servers: single processor (1 GHz), split backplane, PERC 3 RAID controller card, 512 MB of memory, running Informix 7.31.

Is it a fluke that this problem returned after the reboot? Also, if anyone has suggestions on changes we should have made to our onconfig file after this upgrade, we would really appreciate them. I'm wondering if there is something we perhaps should have changed but didn't. Any and all suggestions are much appreciated.
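One way to rule out the statistics angle: since UPDATE STATISTICS fixed it once, a cron entry that re-runs it right after the Monday-night reboot would show whether the problem really tracks the reboot. (Optimizer statistics live in the system catalogs and normally persist across reboots, so if this helps, the reboot may actually be costing you something else, such as cold buffer caches.) A hedged sketch only, not a tested recipe: the database name `stores_db`, the 23:30 time slot, the log path, and the dbaccess path are all placeholders to adjust for your site.

```
# crontab fragment -- assumes the Monday-night reboot has finished by 23:30;
# "stores_db" and the paths below are placeholders for your actual setup.
30 23 * * 1 echo 'UPDATE STATISTICS MEDIUM;' | /opt/informix/bin/dbaccess stores_db - > /tmp/updstats.log 2>&1
```

UPDATE STATISTICS MEDIUM is usually a reasonable middle ground on 7.31; a full HIGH run on every table can take considerably longer.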
How many 10,000 rpm and how many 15,000 rpm disks are you using?
RAID 10 means RAID 1 + RAID 0 (RAID 1 is mirroring, RAID 0 is striping): we use RAID 1 to make the data more secure and RAID 0 to increase speed, so RAID 10 gives you both.
As far as I know, the disks used to build a RAID array must be the same size and the same model.
You cannot combine a 10,000 rpm 9 GB disk with a 15,000 rpm 18 GB disk in the same RAID configuration;
the PERC 3 RAID configuration utility should not allow you to do this.
Send me your RAID configuration and I will review it for you.
Thanks Chulapat. We're using all 18 GB 15,000 rpm drives (8 drives, to be exact). Yes, I did find that you cannot mix and match drives; you are correct that that would definitely cause problems.

How we have it set up: root is on RAID 1. The main root drive is on channel 1 and is mirrored to the first drive on channel 2. The database is striped across 3 drives on channel 1, and that stripe set is mirrored to 3 striped drives on channel 2. So, to sum up, we have 8 physical drives but 2 logical drives: root is the first logical drive and the database is the second.

We have caching for the database turned on, and we're using write-thru as opposed to write-back, and read-ahead as opposed to adaptive or no-read-ahead. Write-back would increase speed, but it isn't as secure as write-thru. I'm thinking that read-ahead may help increase our speed a bit, and I know caching has made a huge difference. I'll have to test it in one of my stores to find out what effect these changes really have. I'm installing 7 new systems with this new configuration next week, so I should know something soon.

Please tell me what your thoughts are on this configuration and whether there's anything you would do differently. You seem to know a lot about RAID, and I appreciate your response to my question; maybe there's something I'm missing or haven't thought of that you may know more about.
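For measuring the effect of those cache-policy changes in the stores, one rough approach is to capture `sar -u` during a heavy query and pull out the %wio column. A minimal sketch, run here against canned sample output since the exact column layout and header lines can vary slightly between SCO releases (the file name and the 90% threshold are arbitrary choices):

```shell
#!/bin/sh
# Hypothetical sample of `sar -u 5 3` output -- check the header on your
# own system; on a live box you would skip its banner lines too.
cat <<'EOF' > /tmp/sar_sample.txt
12:00:05    %usr    %sys    %wio   %idle
12:00:10       4       3      93       0
12:00:15       5       2      92       1
12:00:20       6       4      40      50
EOF

# Flag intervals where %wio is 90 or higher (%wio is the 4th field here).
awk 'NR > 1 && $4 >= 90 { print $1, "wio=" $4 "%" }' /tmp/sar_sample.txt
```

Against a live system you'd pipe `sar -u 5 12` straight into the awk instead of reading the sample file, adjusting which leading lines awk skips to match the real header.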