We have a very large table (100+ columns) that can grow to at least 50 million records. The table has an indicator column, which is used to show the latest positioning within the system. Application users / agents connect to this table to retrieve indicators. The issue is that every time the table is inserted into or updated with new data (which is pretty much all the time), the app user / agent is forced to re-read it, resulting in a SELECT query fired against the database.
From a performance standpoint, a few things I can think of are:
- Define an index and make sure the query is optimized to read from it (the optimizer may or may not choose to use it, though).
- Save the query as a stored procedure and use bind variables.
- Table partitioning (range partitioning to begin with).
- Temp tables/views (if necessary).
- Apply filters as necessary.
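To make the first two points concrete, here is a minimal sketch of the indexing plus bind-variable pattern. It uses Python's built-in sqlite3 module purely for illustration (Oracle's syntax differs, e.g. `:name` bind placeholders and its own `CREATE INDEX` options), and the `positions` table and column names are hypothetical stand-ins for the real schema:

```python
import sqlite3

# Hypothetical miniature of the big table, just to show the pattern.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE positions (id INTEGER PRIMARY KEY, agent TEXT, indicator INTEGER)"
)
# A composite index covering the columns the hot query filters on.
conn.execute(
    "CREATE INDEX idx_positions_agent_ind ON positions (agent, indicator)"
)
conn.executemany(
    "INSERT INTO positions (agent, indicator) VALUES (?, ?)",
    [("a1", 1), ("a1", 0), ("a2", 1)],
)

# Bind variables (the '?' placeholders) let the database reuse the parsed
# statement/plan instead of hard-parsing a new SQL string on every call.
rows = conn.execute(
    "SELECT id FROM positions WHERE agent = ? AND indicator = ?", ("a1", 1)
).fetchall()
print(rows)  # → [(1,)]
```

The same query text with different bound values stays a single cached statement, which is exactly what the stored-procedure-plus-bind-variables suggestion buys you on Oracle.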
Are there any other ways to improve performance, e.g. database caching?
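On the caching idea: one option that doesn't depend on database features at all is a small application-side cache, so agents only hit the database when their cached indicator is stale. The sketch below is an assumption about your architecture, not an Oracle feature; `TtlCache` and the `fetch` callable are hypothetical names:

```python
import time

class TtlCache:
    """Serve repeated reads from memory; re-query only after ttl expires."""

    def __init__(self, ttl_seconds, fetch):
        self.ttl = ttl_seconds
        self.fetch = fetch      # callable that runs the real SELECT
        self._value = None
        self._stamp = 0.0

    def get(self):
        now = time.monotonic()
        if self._value is None or now - self._stamp > self.ttl:
            self._value = self.fetch()  # only here do we touch the database
            self._stamp = now
        return self._value

# Demo with a stand-in fetch function that records how often it runs.
calls = []
cache = TtlCache(5.0, lambda: calls.append(1) or "latest-indicator")
cache.get()
cache.get()                 # served from memory, no second query
print(len(calls))           # → 1
```

The trade-off is staleness: agents may see an indicator up to `ttl_seconds` old, so this only fits if the application can tolerate slightly delayed reads.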
Databases were born to be queried, so I wouldn't worry too much about processes firing queries against it all the time. I would worry instead about what your actual performance targets are. Also, for performance, consider getting more memory and faster disks. There's a lot you can do on both Oracle and Solaris (for example, set up raw devices to store the data, avoiding the OS overhead of native filesystems: inode maintenance, the OS buffer cache, etc.), but since there isn't much more detail here, I can't say much more.