  1. #1
    Join Date
    Jan 2004
    Posts
    66

    Unanswered: Is there a PARALLEL with block-commit hint in Oracle?

    Hi,

    Is there any hint in Oracle 9i which will insert rows in parallel and commit them automatically?
    I mean, if I want to insert 1 million rows into a table,
    can I do this with the following:

    Insert /*+ PARALLEL(A,4) */ into tab1 A
    select /*+ PARALLEL(B,4) */ * from tab2 B;

    That is, insert 50,000 rows at a time in parallel and commit each batch.
    Is there any hint to tell Oracle to do this?
    himridul

  2. #2
    Join Date
    Sep 2002
    Location
    UK
    Posts
    5,171
    Provided Answers: 1

    Re: Is there a PARALLEL with block-commit hint in Oracle?

    No, you would have to use PL/SQL and perform your own block commits.
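
    Something along these lines, for example (just a sketch, assuming tab1 and tab2 have the same structure, as your INSERT ... SELECT * implies; note that inserting whole records in FORALL needs 9i Release 2, on earlier versions you would list the columns explicitly):

    DECLARE
      CURSOR c_src IS SELECT * FROM tab2;
      TYPE t_rows IS TABLE OF tab2%ROWTYPE;
      l_rows t_rows;
    BEGIN
      OPEN c_src;
      LOOP
        -- fetch the next batch of 50,000 rows from the source
        FETCH c_src BULK COLLECT INTO l_rows LIMIT 50000;
        EXIT WHEN l_rows.COUNT = 0;
        -- bulk-insert the batch into the target
        FORALL i IN 1 .. l_rows.COUNT
          INSERT INTO tab1 VALUES l_rows(i);
        COMMIT;  -- block commit after every batch
      END LOOP;
      CLOSE c_src;
    END;
    /

    Bear in mind that committing while still fetching from the cursor can give you ORA-01555 (snapshot too old) on the source table if undo is undersized.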

  3. #3
    Join Date
    Jan 2004
    Posts
    66
    Thanks..... Presently I'm doing this in PL/SQL, including a rownumber field in the source table.

    Is there any hint which will avoid using the rollback segment?
    I'm sure that nothing from my query will need to be rolled back.

    As far as I know, the APPEND hint together with NOLOGGING will cause a direct-path load without redo log usage.
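
    Something like this is what I mean (a sketch; as far as I understand, NOLOGGING is set as a table attribute rather than given as a hint, and a direct-path insert skips undo for the inserted data blocks while still generating a little undo/redo for the data dictionary):

    ALTER TABLE tab1 NOLOGGING;
    ALTER SESSION ENABLE PARALLEL DML;  -- without this the PARALLEL hint on the insert is ignored

    INSERT /*+ APPEND PARALLEL(A,4) */ INTO tab1 A
    SELECT /*+ PARALLEL(B,4) */ * FROM tab2 B;

    COMMIT;  -- required before tab1 can be queried again in this session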

    And my second question:

    Is it possible to spool 80 million rows from a table to a Unix flat file?
    If not, is there any other solution for this?

    Thanks ..
    himridul

  4. #4
    Join Date
    Feb 2004
    Posts
    108
    80M...!
    I've done it with 30 million rows. It worked.
    Your case should also work.

    But make sure that, in case your code crashes, it starts on restart from the point where it crashed.
    That is, if it crashes after 30 million rows, then restarting the procedure will continue from 30 million onwards.

    You may have to include some tracking "work", so it never re-does whatever is already done!

    You may also want to include some "suicidal" code in your procedure, e.g. to kill the job after processing 50 million rows.
    What is the purpose of dumping 80M records into a flat file? Why not try a transportable tablespace?

  5. #5
    Join Date
    Jan 2004
    Posts
    66
    Thanks Pinak....

    "You may have to include some tracking "work". So it never re-does whatever is already done !"

    How to do this ? using SAVEPOINT ?


    "What is the purpose of dumping 80 M recs in flat file ? why no trying transportable tablespace ?"

    client rquirements . They want those data into files.What can I do?
    what is transportable tablespace?
    himridul

  6. #6
    Join Date
    Feb 2004
    Posts
    108
    You'll need to track the progress of dumping into the flat file. Try running a sample that does this for, say, 10 million rows. See how long it takes and estimate how long 80M will take.
    Does it crash in between? If so, you'll have to restart the process, but this time you won't want to dump/process the records already processed.
    My requirements were a bit different: I had to perform some inserts/updates before dumping, so I kept a flag that I updated once a record was processed. That let me skip those records when I restarted the job.
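
    Roughly like this (a sketch; the processed_flag column and the id key are made-up names, use whatever fits your table):

    -- one-time: add a tracking column to the source table
    ALTER TABLE tab2 ADD (processed_flag VARCHAR2(1));

    -- inside the job: mark each record once it has been dumped
    UPDATE tab2 SET processed_flag = 'Y' WHERE id = :current_id;
    COMMIT;

    -- on restart, drive the job only off the unprocessed rows
    SELECT * FROM tab2 WHERE processed_flag IS NULL;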

    If your requirement is just selecting and dumping, then in the worst case you can always dump in subsets of N million rows and finally append/merge those files.
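
    For instance, in SQL*Plus, one subset per run (a sketch; the columns, delimiter, and id ranges are made up, adjust to your table):

    SET TERMOUT OFF FEEDBACK OFF HEADING OFF PAGESIZE 0 LINESIZE 500 TRIMSPOOL ON
    SPOOL /tmp/tab2_part_01.dat
    SELECT col1 || '|' || col2 || '|' || col3
    FROM   tab2
    WHERE  id BETWEEN 1 AND 10000000;
    SPOOL OFF

    Then merge on the Unix side: cat /tmp/tab2_part_*.dat > /tmp/tab2_full.dat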


    http://download-west.oracle.com/docs...paces.htm#9310
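
    In short, you copy a tablespace's datafiles plus a small metadata export instead of unloading rows. Roughly like this (a sketch; the tablespace name my_ts, credentials, and file paths are made up):

    -- on the source database
    ALTER TABLESPACE my_ts READ ONLY;

    $ exp userid=system/manager transport_tablespace=y tablespaces=my_ts file=my_ts_meta.dmp

    -- copy the datafiles and my_ts_meta.dmp to the target, then plug in
    $ imp userid=system/manager transport_tablespace=y datafiles='/u01/my_ts01.dbf' file=my_ts_meta.dmp

    ALTER TABLESPACE my_ts READ WRITE;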
