  1. #1
    Join Date
    Dec 2003
    Location
    Noida, India.
    Posts
    171

    Unanswered: Handling huge volume of data.

    Hi,
    We have a requirement in which data from 5 tables (combined via a join) is to be loaded into 1 table.

    The volume of data to be loaded is huge.

    The logic being used is
    - a cursor selects from the 5 tables.
    - the rows are fetched with BULK COLLECT into a collection (table) variable.
    - a FORALL statement inserts them all into the destination table.
    - SAVE EXCEPTIONS is used to ensure that erroneous records are skipped.

    The problem being faced: when a record gives an error, the error needs to be inserted into an error_log table. This can be achieved using SQL%BULK_EXCEPTIONS. What I am lacking, though, is identification of the record which failed.

    When the administrator looks at the error_log table, he will not be able to identify which record failed, so the PK of the failed record also needs to be saved in the error_log table. SQL%BULK_EXCEPTIONS(i).error_index just provides the index of the row within the collection, which does not help the administrator much.

    I tried the RETURNING clause, but it captured only the successfully inserted records, not the failed ones.

    Any suggestions on this will be highly appreciated.

    Thanks.

    The sample code I am using is
    Code:
    declare
        TYPE tab_type_loan IS TABLE OF NUMBER(10);
        a_data tab_type_loan;
        a_err  tab_type_loan;
        dml_errors exception;
        PRAGMA exception_init(dml_errors, -24381);  -- ORA-24381: error(s) in array DML
    begin
        -- the 5th element (0) will make 100/a_data(i) fail with ORA-01476
        a_data := tab_type_loan(10,20,30,40,0,50);

        FORALL i in 1..a_data.count save exceptions
            insert into seema_test values (100/a_data(i))
                returning a_data(i) bulk collect into a_err;
        commit;

    exception
        when dml_errors then
            dbms_output.put_line('----------------');
            dbms_output.put_line('Total number of errors:' || SQL%bulk_exceptions.count);
            for i in 1..SQL%bulk_exceptions.count loop
                dbms_output.put_line(SQL%bulk_exceptions(i).error_index);
                dbms_output.put_line(SQL%bulk_exceptions(i).error_code);
            end loop;
            dbms_output.put_line('----------------');
            -- note: RETURNING only captures the rows that were inserted successfully
            dbms_output.put_line('Total number of errors:' || a_err.count);
            for i in 1..a_err.count loop
                dbms_output.put_line('value:' || a_err(i));
            end loop;

    end;
    /
    and the result is:

    ----------------
    Total number of errors:1
    5
    1476
    ----------------
    Total number of errors:5
    value:10
    value:20
    value:30
    value:40
    value:50

    PL/SQL procedure successfully completed.
    Oracle can do wonders !

  2. #2
    Join Date
    Sep 2002
    Location
    UK
    Posts
    5,171
    Provided Answers: 1
    The administrator will not know what the index values mean - but your code does! So instead of just writing the error_index value to the output, write the data values from that element of the array:

    dbms_output.put_line(a_data(SQL%bulk_exceptions(i).error_index));
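
    For example, something along these lines (just a sketch based on your test case: the error_log table with pk_value and error_code columns is assumed here, and in the real load you would store the PK from the joined record rather than the test value):
    Code:
    declare
        TYPE tab_type_loan IS TABLE OF NUMBER(10);
        a_data     tab_type_loan := tab_type_loan(10,20,30,40,0,50);
        a_bad      tab_type_loan := tab_type_loan();  -- values (or PKs) of failed rows
        a_bad_code tab_type_loan := tab_type_loan();  -- corresponding error codes
        dml_errors exception;
        PRAGMA exception_init(dml_errors, -24381);
    begin
        FORALL i in 1..a_data.count save exceptions
            insert into seema_test values (100/a_data(i));
        commit;
    exception
        when dml_errors then
            -- copy the failed values and error codes out of SQL%BULK_EXCEPTIONS
            -- before running any further SQL
            for i in 1..SQL%bulk_exceptions.count loop
                a_bad.extend;
                a_bad_code.extend;
                a_bad(a_bad.count)      := a_data(SQL%bulk_exceptions(i).error_index);
                a_bad_code(a_bad.count) := SQL%bulk_exceptions(i).error_code;
            end loop;
            -- error_log(pk_value, error_code) is an assumed table for this sketch
            FORALL i in 1..a_bad.count
                insert into error_log (pk_value, error_code)
                values (a_bad(i), a_bad_code(i));
            commit;
    end;
    /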

  3. #3
    Join Date
    Dec 2003
    Location
    Noida, India.
    Posts
    171
    Thanks Tony!!
    Oracle can do wonders !

  4. #4
    Join Date
    Dec 2003
    Location
    Noida, India.
    Posts
    171
    In continuation of the above....

    The requirement was changed so that we don't require the bad rows. We have therefore changed the code to do

    insert /*+ APPEND */ into XXX select ... from YYY

    The select query above selects data for 3 different conditions, so it uses a UNION ALL of 3 queries.

    The primary key of the destination table comes from a sequence. The obvious modification to the query was

    select seq.nextval, a,b,c from AAA
    union all
    select seq.nextval, x,y,z from BBB
    union all
    select seq.nextval, ...

    Now, Oracle does not permit the use of a sequence in UNION queries (version 9i).

    So I created a function that returns seq.nextval and used this function in the query.
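
    (For reference, a minimal sketch of that function; the function name is just a placeholder, and only two of the three branches are shown:)
    Code:
    create or replace function get_seq_nextval return number is
        l_val number;
    begin
        select seq.nextval into l_val from dual;
        return l_val;
    end;
    /

    insert /*+ APPEND */ into XXX
    select get_seq_nextval, a, b, c from AAA
    union all
    select get_seq_nextval, x, y, z from BBB;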

    Which is the better option:
    - use of a function that returns the sequence's next value
    - use of 3 different inserts (since the operator used is UNION ALL and not UNION, there is no issue in splitting into 3 queries); each insert can use seq.nextval

    Or is there any other better approach?
    Regards,
    Oracle can do wonders !

  5. #5
    Join Date
    Sep 2002
    Location
    UK
    Posts
    5,171
    Provided Answers: 1
    I would go for the 3 separate inserts: calling PL/SQL functions from SQL is notoriously slow.
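
    Something along these lines (a sketch; the select lists and the third source table are assumed here, and note that a direct-path insert must be committed before the table is touched again):
    Code:
    insert /*+ APPEND */ into XXX select seq.nextval, a, b, c from AAA;
    commit;  -- required after a direct-path (APPEND) insert into XXX

    insert /*+ APPEND */ into XXX select seq.nextval, x, y, z from BBB;
    commit;

    insert /*+ APPEND */ into XXX select seq.nextval, p, q, r from CCC;  -- third source table assumed
    commit;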

  6. #6
    Join Date
    Dec 2003
    Location
    Noida, India.
    Posts
    171
    Using the APPEND hint, we do see a tremendous increase in insert performance. Does it have any effect on SELECTs against this table?
    Oracle can do wonders !

  7. #7
    Join Date
    Sep 2002
    Location
    UK
    Posts
    5,171
    Provided Answers: 1
    Not usually, when you are loading large volumes of data. The difference is that all the data is appended at the "end" of the table, so any space freed up by deletes is not used during the process. In an extreme case, if you delete a lot of data between each load then your table could get physically bigger and bigger even though the total number of rows isn't growing. This would cause full table scans to get progressively slower. Of course, if all data is deleted between loads then you should be using TRUNCATE not DELETE anyway, which will solve that problem.
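
    For example, if the table is completely emptied between loads, the cycle would look something like this (a sketch, assuming XXX is fully reloaded each time):
    Code:
    truncate table XXX;   -- resets the high water mark, unlike DELETE
    insert /*+ APPEND */ into XXX select seq.nextval, a, b, c from AAA;
    commit;               -- required after a direct-path insert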
