I need to make a decision on a performance issue when dealing with huge datasets, say 10M or even 100M rows. We want to avoid reading all the data
into memory before processing, since that causes an "out of memory" error.

Using JDBC, we currently have two options:

1. Using a disk file: we save the data to a disk file first, then read it back
chunk by chunk for processing.

2. Keeping the ResultSet open and fetching the data chunk by chunk
for processing.
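To make the two options concrete, here is a minimal sketch of both. The JDBC methods are shown for shape only (they need a live `Connection`), while `main` exercises the chunked file reader with fake rows; all names and the one-string-per-line record format are illustrative, and note that `setFetchSize` is only a hint whose streaming behavior varies by driver (e.g. PostgreSQL streams only with autocommit off, MySQL wants `Integer.MIN_VALUE` or `useCursorFetch=true`):

```java
import java.io.*;
import java.nio.file.*;
import java.sql.*;
import java.util.*;

public class ChunkedProcessing {

    // Option 1, step A: spool the ResultSet to a disk file.
    static void spoolToFile(Connection con, String sql, Path file) throws Exception {
        try (Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(sql);
             BufferedWriter out = Files.newBufferedWriter(file)) {
            while (rs.next()) {
                out.write(rs.getString(1)); // serialize one row per line (simplified)
                out.newLine();
            }
        }
    }

    // Option 1, step B: read the spooled file back in fixed-size chunks;
    // memory use is bounded by chunkSize regardless of total row count.
    static void processFileInChunks(Path file, int chunkSize,
                                    java.util.function.Consumer<List<String>> handler)
            throws IOException {
        try (BufferedReader in = Files.newBufferedReader(file)) {
            List<String> chunk = new ArrayList<>(chunkSize);
            String line;
            while ((line = in.readLine()) != null) {
                chunk.add(line);
                if (chunk.size() == chunkSize) {
                    handler.accept(chunk);
                    chunk = new ArrayList<>(chunkSize);
                }
            }
            if (!chunk.isEmpty()) handler.accept(chunk); // leftover partial chunk
        }
    }

    // Option 2: keep the ResultSet open and let the driver stream rows.
    static void streamResultSet(Connection con, String sql, int fetchSize,
                                java.util.function.Consumer<ResultSet> rowHandler)
            throws SQLException {
        con.setAutoCommit(false); // some drivers only stream with autocommit off
        try (Statement st = con.createStatement()) {
            st.setFetchSize(fetchSize); // hint: fetch this many rows per round trip
            try (ResultSet rs = st.executeQuery(sql)) {
                while (rs.next()) rowHandler.accept(rs); // one row at a time
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // Demonstrate the chunked file reader with fake "rows" (no database needed).
        Path tmp = Files.createTempFile("rows", ".txt");
        List<String> rows = new ArrayList<>();
        for (int i = 0; i < 10; i++) rows.add("row-" + i);
        Files.write(tmp, rows);
        processFileInChunks(tmp, 4, chunk -> System.out.println("chunk of " + chunk.size()));
        Files.delete(tmp);
    }
}
```

With 10 rows and a chunk size of 4, this prints `chunk of 4`, `chunk of 4`, `chunk of 2`.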

For now, I don't think we need to go backward. Even if we did: for solution 1, we could use RandomAccessFile; for solution 2, we could build something on top of a scrollable ResultSet.
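For the RandomAccessFile idea, fixed-width records make backward seeks trivial, since record `i` always starts at offset `i * RECORD_LEN`. A minimal sketch (the record width and names are illustrative, not a proposed format):

```java
import java.io.*;

public class BackwardAccess {
    static final int RECORD_LEN = 16; // fixed record width (illustrative)

    // Pad/truncate a row to a fixed width so record i starts at i * RECORD_LEN.
    static byte[] toRecord(String row) {
        byte[] rec = new byte[RECORD_LEN];
        byte[] src = row.getBytes();
        System.arraycopy(src, 0, rec, 0, Math.min(src.length, RECORD_LEN));
        return rec;
    }

    // Jump directly to any record -- forward or backward -- by index.
    static String readRecord(RandomAccessFile raf, int index) throws IOException {
        raf.seek((long) index * RECORD_LEN);
        byte[] rec = new byte[RECORD_LEN];
        raf.readFully(rec);
        return new String(rec).trim(); // trim strips the zero-byte padding
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("records", ".dat");
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
            for (int i = 0; i < 5; i++) raf.write(toRecord("row-" + i));
            // Read record 4, then go *backward* to record 1.
            System.out.println(readRecord(raf, 4)); // row-4
            System.out.println(readRecord(raf, 1)); // row-1
        }
        f.delete();
    }
}
```

On the ResultSet side, JDBC already defines scrollable cursors (`con.createStatement(ResultSet.TYPE_SCROLL_INSENSITIVE, ResultSet.CONCUR_READ_ONLY)` gives you `previous()` and `absolute(n)`), but be aware that some drivers implement scrollability by buffering the whole result client-side, which would defeat the memory goal here.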

So, could anybody suggest which way is better?