80 M ... !
I've done this with 30 million. It did work.
Your case should also work.
But make sure that, in case your code crashes, it restarts from the point where it crashed.
That is, if it crashes after 30 million rows, then restarting the procedure should pick up from 30 million onwards.
You may have to include some tracking "work", so it never re-does whatever is already done !
You may also want to include some "suicidal" code in your procedure: for example, after processing 50 million, you want the job to kill itself.
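A minimal sketch of both ideas, assuming a hypothetical checkpoint table export_checkpoint (pre-seeded with one row per job), a numeric key column ID on the source table, and an Oracle directory object DUMP_DIR; adjust the names to your own schema:

-- Hypothetical restartable dump: resumes from the last committed ID and
-- stops itself ("suicidal" code) once p_max_rows rows have been written.
-- Assumes export_checkpoint is pre-seeded with ('BIG_DUMP', 0).
CREATE OR REPLACE PROCEDURE dump_big_table (p_max_rows IN NUMBER DEFAULT NULL)
IS
  v_file     UTL_FILE.FILE_TYPE;
  v_last_id  NUMBER;
  v_count    NUMBER := 0;
BEGIN
  -- Where did the previous run stop?
  SELECT last_id INTO v_last_id
    FROM export_checkpoint
   WHERE job_name = 'BIG_DUMP';

  v_file := UTL_FILE.FOPEN('DUMP_DIR', 'big_dump.dat', 'a', 32767);

  FOR r IN (SELECT id, col1, col2
              FROM big_table
             WHERE id > v_last_id
             ORDER BY id)
  LOOP
    UTL_FILE.PUT_LINE(v_file, r.id || '|' || r.col1 || '|' || r.col2);
    v_last_id := r.id;
    v_count   := v_count + 1;

    IF MOD(v_count, 100000) = 0 THEN
      -- Checkpoint every 100k rows; a crash re-dumps at most the last 100k (dedupe if needed)
      UPDATE export_checkpoint SET last_id = v_last_id
       WHERE job_name = 'BIG_DUMP';
      COMMIT;
      UTL_FILE.FFLUSH(v_file);
    END IF;

    -- "Suicidal" code: stop on our own once the limit is reached
    EXIT WHEN p_max_rows IS NOT NULL AND v_count >= p_max_rows;
  END LOOP;

  -- Record exactly where we stopped (normal end or voluntary exit)
  UPDATE export_checkpoint SET last_id = v_last_id
   WHERE job_name = 'BIG_DUMP';
  COMMIT;
  UTL_FILE.FCLOSE(v_file);
END dump_big_table;
/

Committing inside the loop does carry the usual ORA-01555 risk on a very long-running cursor, so keep the checkpoint interval sensible.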
What is the purpose of dumping 80 M records into a flat file ? Why not try a transportable tablespace ?
You'll need to track the progress of dumping into the flat file. Try running sample code that does this for, say, 10 million rows. See how long it takes and estimate how long 80 M will take.
Does it crash in between ? If so, you'll have to restart the process, but this time you may not want to dump/process the records already processed.
My requirements were a bit different: I had to perform some inserts/updates before dumping. So I kept a flag that I updated once a record was processed. This let me skip those records when I restarted the job.
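Roughly what that looks like, assuming a processed_flag column ('N'/'Y') on the source table and the same hypothetical DUMP_DIR directory; only unflagged rows are picked up on a restart:

-- Hypothetical sketch of the flag approach: mark each row once it has been
-- worked on and dumped, so a restarted job simply skips it.
DECLARE
  v_file  UTL_FILE.FILE_TYPE;
  v_done  PLS_INTEGER := 0;
BEGIN
  v_file := UTL_FILE.FOPEN('DUMP_DIR', 'big_dump.dat', 'a', 32767);

  FOR r IN (SELECT rowid AS rid, id, col1
              FROM big_table
             WHERE processed_flag = 'N')
  LOOP
    -- ... per-row insert/update work would go here ...
    UTL_FILE.PUT_LINE(v_file, r.id || '|' || r.col1);
    UPDATE big_table SET processed_flag = 'Y' WHERE rowid = r.rid;

    v_done := v_done + 1;
    IF MOD(v_done, 100000) = 0 THEN
      COMMIT;                 -- persist the flags in batches, not per row
      UTL_FILE.FFLUSH(v_file);
    END IF;
  END LOOP;

  COMMIT;
  UTL_FILE.FCLOSE(v_file);
END;
/

The flag costs an extra update per row, which is why it mainly pays off when you are doing inserts/updates anyway.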
If your requirement is just selecting and dumping, then in the worst case you can always dump in subsets of 'N' million rows and finally append/merge those files.
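For the subset idea, slicing on the key is enough: one file per 'N' million IDs, merged afterwards at the OS level. A sketch under the same hypothetical names, assuming a numeric, indexed ID column:

-- Hypothetical sketch: dump the table in slices of 10 million IDs each,
-- one flat file per slice, then concatenate the pieces outside the database.
DECLARE
  c_slice  CONSTANT NUMBER := 10000000;     -- 'N' million per file
  v_file   UTL_FILE.FILE_TYPE;
  v_max_id NUMBER;
BEGIN
  SELECT MAX(id) INTO v_max_id FROM big_table;

  FOR i IN 0 .. TRUNC((v_max_id - 1) / c_slice)
  LOOP
    v_file := UTL_FILE.FOPEN('DUMP_DIR', 'big_dump_' || (i + 1) || '.dat', 'w', 32767);

    FOR r IN (SELECT id, col1
                FROM big_table
               WHERE id >  i * c_slice
                 AND id <= (i + 1) * c_slice)
    LOOP
      UTL_FILE.PUT_LINE(v_file, r.id || '|' || r.col1);
    END LOOP;

    UTL_FILE.FCLOSE(v_file);   -- each slice is safely closed before the next starts
  END LOOP;
END;
/
-- Then merge on the server, e.g.: cat big_dump_*.dat > big_dump_all.dat

Without an index on id, each slice means another full scan of the table, so for 80 M rows the checkpoint approach above is usually the cheaper way to get restartability.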