Unanswered: Oracle and Storage Area Networks (SAN)...I need insight
We have a SunFire V480 that we are going to be connecting to our SAN. We all agree that putting these huge data files on the SAN is the way to go.
But we're stumped on deciding what to do with the binaries. Should we put them on the SAN as well? We've heard of others doing it.
1.) I'd like to get a better picture of how this works. You guys can validate some of the preconceptions I'll include in this thread.
2.) I'd like to find out more about some of the benefits, and perhaps caveats, of doing this. (For now, let's say money is not an issue.)
I am fairly new to SANs and getting rusty on Sun.
Here's my (simplistic) idea of how this works. If the Sun box were connected to the SAN, we would simply mount the file systems on the SAN that have been allocated to the box (just as with any file system... right?). So if the binaries are also on the SAN, we would simply mount those file systems as well and be able to boot Oracle from the Sun box. (Please confirm.)
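For what it's worth, here's roughly how that mount picture might look on a Solaris box. Every device path and mount point below is invented for illustration; the real names depend on how your SAN LUNs are presented to the V480:

```
# Hypothetical /etc/vfstab entries -- all device names are made up
#device to mount   device to fsck     mount point  FS type  fsck pass  mount at boot  options
/dev/dsk/c2t1d0s6  /dev/rdsk/c2t1d0s6 /u01         ufs      2          yes            logging
/dev/dsk/c2t2d0s6  /dev/rdsk/c2t2d0s6 /u02         ufs      2          yes            logging
```

With entries like these, /u01 could hold ORACLE_HOME (the binaries) and /u02 the datafiles; once mounted, both look like ordinary local filesystems to Oracle, which matches the "mount it like any file system" picture you describe.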
But if we have the binaries on the SAN, can other SUN boxes use the same binaries to host Oracle?
If so, then I could see that when it comes time to upgrade/patch... things are centralized! This would seem to be a great benefit.
Could you guys who have experience in this, or insight, tell me what I need to consider before moving forward with implementing this?
I think that sometimes the configurations are so specific that you won't necessarily find an answer here or on the web/google anywhere.
I have no experience of the SAN you describe, or of the config you describe, but I have worked at a number of sites running large SANs.
The impression I get is that, given enough bandwidth between the SAN and the Oracle server, the SAN is best left purely to deliver storage, and Oracle left to run on another box. Don't quote me on it, though.
What will invariably happen is that once people see the benefit of the SAN, the CPU (or cluster of CPUs) controlling the SAN will be given the extra job of print spooling 1 GB PDFs, etc. :-( [Honest, you'd be shocked.]
I really do not wish to sound negative (I'm trying to be helpful!) but given your mention of budget not being an issue, I think the only way you will find out for sure is to try it. Plug the SAN into a dev server (or production copy) and emulate the type of workload it gets.
I would imagine there is software already available to emulate live software activity (think WinRunner, but designed for Oracle), which could allow you to test the SAN on a DEV instance. If you can't find any, talk to me nicely and maybe I'll write something that can watch a production instance and emulate the activity on a DEV instance :-) (Shameless plug there!).
Moving through the rest of your post: as for whether a number of database instances can share the same binaries, yes, I believe so. It happens often; two instances on one machine is quite common. You would need to check the specifics of your OS, though (which is what I guess you're doing by posting here, so I'll shut up now).
"The impression I get is that, given enough bandwidth between the SAN and the Oracle server, the SAN is best left purely to deliver storage, and Oracle left to run on another box. Don't quote me on it, though."
But seriously, I hope you understand that what I intend to do IS to run Oracle on another box. The SAN would simply host the binary files and data files for Oracle. All Oracle processes would run on a separate box from the SAN.
Just wanted to make sure that was understood.
But thanks for the feedback. It's been tough to get any for this topic.
The advantage of using a SAN for datafiles, ORACLE_HOME and config files is that if your Oracle server blows up, you can grab a new server (providing the OS is configured the same way), make sure it has the same hostname/IP as the old box, and voila, your database is back : ). This does imply your redo logs are on the SAN as well, though, which may cause performance issues if you have bandwidth/latency problems.
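To make that failover idea concrete, here's a rough dry-run sketch of the steps on the replacement box. Every name in it (interface, IP, device paths, SID, ORACLE_HOME path) is hypothetical; it shows the shape of the procedure, not a tested script:

```shell
#!/bin/sh
# Hypothetical failover sketch -- IPs, devices and paths are all
# invented for illustration. With DRY_RUN=1 (the default) it only
# prints the steps it would take; nothing is actually executed.

DRY_RUN=${DRY_RUN:-1}
ORACLE_SID=PROD                              # hypothetical SID
ORACLE_HOME=/u01/app/oracle/product/9.2.0    # hypothetical path
PLAN=""

run() {
    PLAN="$PLAN $*;"
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "WOULD RUN: $*"
    else
        "$@"
    fi
}

# 1. Take over the dead server's IP so clients reconnect unchanged.
run ifconfig ce0 addif 192.168.10.50 netmask 255.255.255.0 up

# 2. Mount the SAN filesystems holding ORACLE_HOME and the datafiles.
run mount /dev/dsk/c2t1d0s6 /u01
run mount /dev/dsk/c2t2d0s6 /u02

# 3. Start the instance from the shared binaries on the SAN.
run su - oracle -c "$ORACLE_HOME/bin/sqlplus /nolog"
```

As written it only prints the plan; you would flip DRY_RUN to 0 (and issue the actual startup inside sqlplus) after validating each step against your own environment.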
Yep, bandwidth (and latency) are key, and you need to ensure that not only the theoretical bandwidth is enough but also the actual bandwidth over the day, i.e. you need good monitoring tools. So if there is some backup job running at midnight chewing up bandwidth, that might seriously degrade your database performance.
Oh, one thing I forgot to mention: if you have a network switch in front of the database server that can do NAT, then you can have a set of servers which all appear to have one IP address at the app server end (obviously only one server is active). Then if that server goes down, you bring up Oracle on another server, get your switch to point to the new server, and your service is back up.
You could have different-spec servers, each with its own init.ora tuned for that specific server. So you could have a 2-CPU box and a 4-CPU box. When you're doing large data loads you might switch to the 4-CPU box, but otherwise you would use the smaller server. Of course switching servers does involve downtime, but for most mid-sized Oracle databases you're talking about a few minutes.
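For illustration, the server-specific part could be as small as a pair of init.ora files kept alongside the shared binaries on the SAN. All the values below are invented; the real numbers depend on RAM and workload (and these are older-style parameters, e.g. db_block_buffers was superseded by db_cache_size in 9i):

```
# initPROD_2cpu.ora -- hypothetical settings for the smaller 2-CPU box
db_name              = PROD
db_block_buffers     = 32000
shared_pool_size     = 150000000
parallel_max_servers = 4

# initPROD_4cpu.ora -- hypothetical settings for the 4-CPU load box
db_name              = PROD
db_block_buffers     = 96000
shared_pool_size     = 300000000
parallel_max_servers = 16
```

Whichever box is active just points the instance at its matching file (startup pfile=<path to the matching file>), while the datafiles, redo logs and binaries stay shared on the SAN.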