* First of all, choose the right db_block_size. During performance tests I have found that 8K is good for most applications and that 16K is only a little better. I have not found any reason not to choose a "large" db_block_size.
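The block size is fixed at database creation in init<sid>.ora and cannot be changed afterwards. As a sketch (parameter syntax as in classic Oracle init.ora files):

```
# init<sid>.ora -- set before CREATE DATABASE; cannot be changed later
db_block_size = 8192      # 8K, a good default for most applications
# db_block_size = 16384   # 16K, only a little faster in my tests
```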
* Systems with mass inserts, or inserts of large amounts of data, benefit from the following:
- large online redo log files (up to 128M)
- storing tables subject to mass inserts, updates and deletes in raw partitions
Raw partitions can make mass inserts, updates and deletes more than 20 times faster.
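Both points above can be sketched as follows; the file names, group number and raw device path are examples only and must be adapted to your system:

```sql
-- A larger online redo log group (up to 128M each)
ALTER DATABASE ADD LOGFILE GROUP 4
  ('/u02/oradata/ORCL/redo04.log') SIZE 128M;

-- A tablespace whose datafile sits directly on a raw partition,
-- for tables with heavy insert/update/delete traffic
CREATE TABLESPACE ts_mass_dml
  DATAFILE '/dev/rdsk/c0t2d0s4' SIZE 2000M;
```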
A very interesting factor in getting a system that answers as fast as possible is the design of the tablespaces. The creation parameters of a tablespace depend on the objects stored in it.
Large objects need tablespaces with large extents; small objects need smaller extent sizes. The extent size should be one of 128K, 4M or 128M. Do not choose a PCTINCREASE other than 0 percent, because growing extents result in fragmentation!
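A tablespace created along these lines might look like this (names, path and sizes are examples); with PCTINCREASE 0, every extent has the same size and fragmentation is avoided:

```sql
-- All extents the same size (128K), no growing extents
CREATE TABLESPACE ts_128k
  DATAFILE '/u03/oradata/ORCL/ts_128k01.dbf' SIZE 500M
  DEFAULT STORAGE (INITIAL 128K NEXT 128K PCTINCREASE 0);
```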
Systems with large amounts of data, from roughly 8GB upwards, can perform 10 to 15 times better with the right tablespace settings and distribution of database objects!
Some parameters in init<sid>.ora are very interesting when you try to increase database performance.
First of all, the performance of DELETE statements depends on the size of the log_buffer value in init<sid>.ora. Every value you choose should be divisible by 1024 without a remainder; such values will perform well. The maximum is, in my opinion, system dependent.
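For example (the value itself is an illustration, not a recommendation for every system):

```
# init<sid>.ora
log_buffer = 1048576    # 1M = 1024 * 1024, divisible by 1024
```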
Most parameters in init<sid>.ora can be changed with little effort.
Design problems, such as the distribution of database objects (tables, indexes, procedures) within a database, are more difficult for a DBA to handle.
So design your tablespaces in a way that fits your objects. Choose the correct extent size, because databases with fragmentation or a large number of extents spread over different tablespaces do not perform well!
Store tables with a small amount of data (less than 128M) in tablespaces with 128K extents. Store larger tables, up to 4G, in tablespaces with an extent size of 4M, and store all the rest of your data in tablespaces with 128M extents.
Your data modelling software or scripts must store your objects (tables, indexes, procedures) in the right place!
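In a creation script, "the right place" simply means an explicit TABLESPACE clause on every object. The table, index and tablespace names below are hypothetical examples:

```sql
-- Small table (< 128M of data) -> tablespace with 128K extents
CREATE TABLE customers (id NUMBER, name VARCHAR2(80))
  TABLESPACE ts_128k;

-- Medium table (up to 4G of data) -> tablespace with 4M extents
CREATE TABLE orders (id NUMBER, customer_id NUMBER, amount NUMBER)
  TABLESPACE ts_4m;

-- Indexes are placed explicitly as well
CREATE INDEX orders_cust_ix ON orders (customer_id)
  TABLESPACE ts_4m;
```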
Don't choose just one tablespace for all the objects of your application. That probably looks like the easiest way to you, but when your software performs well and the system grows, you will see that it is not the best way to design an Oracle-based application!
Last but not least, performance depends on hardware. In my opinion, Oracle can always be made to perform better on the same hardware; often the settings are wrong, the application code is bad, and so on.
Hardware scales linearly, while software settings and code changes often scale exponentially. But nevertheless, performance depends on hardware.
Don't use software mirroring of partitions/slices too much; reduce it to the minimum. Use RAID0 wherever you can, because Oracle performs well on RAID0 systems, but be sure to have good backups as well as safe redo and control files and safe archived redo logs. Mirror the archived redo logs, because they can help you restore a database.
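On Oracle 8 the archived redo logs can be written to two destinations without software mirroring, for example with a duplexed archive destination (the paths are examples; put the second copy on a different disk or controller):

```
# init<sid>.ora
log_archive_start       = true
log_archive_dest        = /u05/arch/ORCL
log_archive_duplex_dest = /u06/arch/ORCL   # second copy of each archived log
```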
Spread the hard drives across different controllers and use a high number of SCSI/FC/IDE controllers. One controller per RAID0 array seems to be best.
Use the available RAM by increasing DB_BLOCK_BUFFERS, and with it the SGA size. Increase shared_pool_size, sort_area_size and so on as well.
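A sketch of these memory parameters in init<sid>.ora; the values are illustrations only and must be sized to the RAM actually available on your server:

```
# init<sid>.ora -- example values, adapt to your RAM
db_block_buffers = 16000     # buffer cache = 16000 * db_block_size
shared_pool_size = 52428800  # 50M for SQL and PL/SQL caching
sort_area_size   = 1048576   # 1M per sort, per session
```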
I hope some of these facts are interesting and new to you.
I wrote this article because I very often have to turn badly performing systems into well performing ones, and I think the better way is to design a data model so that it runs well out of the box!