I have a table with 30 fields and more than 50 million rows.
One of the requirements is to fetch rows based on a date range.
This query is taking more than half an hour. Please suggest
how I can improve the performance.
To recommend a correct extent size I'd need your table structure, but in any case I insist: create an index on that date field. Then compare the timings: the time to create the index plus run your select, versus the time of the original select.
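As a sketch, assuming the table is called bigtab and the date column is order_date (substitute your real names), the index and the timing test would look roughly like this:

```sql
-- Hypothetical names: replace bigtab / order_date with your own.
CREATE INDEX ix_bigtab_date ON bigtab (order_date);

-- Refresh optimizer statistics so the new index is actually considered.
UPDATE STATISTICS FOR TABLE bigtab;

-- The range query should now do an index scan instead of reading
-- all 50 million rows:
SELECT *
FROM   bigtab
WHERE  order_date BETWEEN MDY(1,1,2005) AND MDY(3,31,2005);
```

Time the index build plus this select; for a selective date range it should come in well under the original half hour, and every later run pays only the query cost.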
Given your extent size I'd guess that a table that large is fragmented among hundreds of extents. If you're going to unload and reload the table, you need to make your initial extent size large enough to hold the entire table, or an entire fragment (if fragmenting). Next extents should then be a percentage based upon expected growth - usually 25% if the table is going to grow at all. The game, if you can play it, is to have the table and a year's worth of growth in fewer than 10 extents.
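In Informix the extent sizes are declared on the CREATE TABLE itself, in kilobytes. A sketch with made-up numbers - you'd compute the real first-extent size from rowsize times row count, and the next-extent size from expected growth:

```sql
-- Illustrative sizes only: EXTENT SIZE should hold the whole table
-- (or whole fragment); NEXT SIZE is sized for about a year's growth.
CREATE TABLE bigtab
(
    id          INTEGER,
    order_date  DATE
    -- ... your remaining columns
)
EXTENT SIZE 4000000   -- first extent, in KB
NEXT SIZE   1000000;  -- each subsequent extent, in KB
```

With the first extent covering the current data, the table stays in a handful of contiguous extents instead of hundreds of scattered ones, which keeps sequential scans sequential on disk.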
Next question: why are you fragmenting? With no index, all you're accomplishing is having multiple scans walking through every record looking for your date criteria. Unless you've laid out your fragments on disk perfectly, the scans are just thrashing against each other.
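If you do keep the table fragmented, fragmenting by the date column you query on lets the optimizer skip fragments that can't match the range (fragment elimination) instead of scanning them all. A sketch, assuming Informix expression-based fragmentation and hypothetical dbspace names dbs1 through dbs3:

```sql
-- Hypothetical dbspaces; fragment on the column the queries filter by,
-- so a date-range predicate only touches the relevant fragments.
CREATE TABLE bigtab
(
    id          INTEGER,
    order_date  DATE
    -- ... your remaining columns
)
FRAGMENT BY EXPRESSION
    order_date <  MDY(1,1,2004)                                  IN dbs1,
    order_date >= MDY(1,1,2004) AND order_date < MDY(1,1,2005)   IN dbs2,
    REMAINDER                                                    IN dbs3;
```

A query restricted to 2004 then reads only dbs2; the other fragments are eliminated before any I/O happens, which is the whole point of fragmenting by the query key.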
I think you need to rethink how this table is structured vs. queried.