  1. #1
    Join Date
    Aug 2003
    Posts
    41

Unanswered: Insert Into .. Select ..

    Hi,

I'm trying to copy a few huge tables. I can't do an export/import due to some restrictions, so I chose to use the INSERT INTO .. SELECT command. But the problem is, since the tables are huge, a huge amount of rollback is generated.

Is it possible to issue a COMMIT partway through an INSERT INTO <tab1> SELECT ... FROM <tab2> statement to avoid this?

    -----------------
For the same problem, I created a huge rollback segment and issued a COMMIT at the end, and it worked. I'm using the OPTIMAL parameter for the rollback segment, but it did not shrink after this huge insert. So when I tried to copy the second table, instead of shrinking and reusing the same rollback segment, the rollback segments started to grow again.
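For reference, a minimal sketch of that approach (BIG_RBS is an assumed name for the large rollback segment I created; SET TRANSACTION must be the first statement of the transaction):
-----------------
SET TRANSACTION USE ROLLBACK SEGMENT big_rbs;

INSERT INTO tab1 SELECT * FROM tab2;

COMMIT;
-----------------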

Any ideas? I'm on Oracle 8.1.7 on NT.

    thanks,
    Sunil

  2. #2
    Join Date
    Jul 2003
    Posts
    2,296
Why don't you export the data comma-delimited (or delimited any other way you want) and then use SQL*Loader to load the data again?
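A minimal sketch of that route, assuming tab2 has columns c1, c2, c3 (column names are made up for illustration). First spool the data out of SQL*Plus:
---------------------------
SET PAGESIZE 0
SET HEADING OFF
SET FEEDBACK OFF
SET TRIMSPOOL ON
SPOOL tab2.csv
SELECT c1 || ',' || c2 || ',' || c3 FROM tab2;
SPOOL OFF
---------------------------
Then load it with a control file (tab2.ctl) like:
---------------------------
LOAD DATA
INFILE 'tab2.csv'
APPEND INTO TABLE tab1
FIELDS TERMINATED BY ','
(c1, c2, c3)
---------------------------
and run sqlldr with control=tab2.ctl. A nice side effect: conventional-path SQL*Loader commits every ROWS rows (e.g. rows=5000), so the rollback segments stay small.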


    Why can't you use Export/Import????
    What error do you get?


    If you have TOAD, this would be pretty easy to do with that application.
    - The_Duck
    you can lead someone to something but they will never learn anything ...

  3. #3
    Join Date
    Aug 2003
    Location
    SW Ohio
    Posts
    198


Just my thinking...

Take a look at the data and see how it may group out: maybe by date, month, year, transaction number, or user name; you may even have to do SUBSTR(<field>,1,1) to get smaller data groupings. Then do grouping queries with counts (example below).
    ------------------------------------------
    select FISCAL_YEAR, COUNT(FISCAL_YEAR) AS ROW_COUNT
    FROM jrnl_header
    GROUP BY FISCAL_YEAR;

FISCAL_YEAR  ROW_COUNT
-----------  ---------
       1997          6
       1998        220
       1999       6546
       2000      15720
       2001      16542
       2002      24768
       2003       5895

7 rows selected.
    ------------------------

And remember, there is no shame in doing smaller statements if need be, such as:
    ---------------------------
    select FISCAL_YEAR, ACCOUNTING_PERIOD, COUNT(FISCAL_YEAR) AS ROW_COUNT
    FROM jrnl_header
    GROUP BY FISCAL_YEAR, ACCOUNTING_PERIOD;

FISCAL_YEAR  ACCOUNTING_PERIOD  ROW_COUNT
-----------  -----------------  ---------
       1997                 12          6
       1998                  1         15
...
       2003                  7          1
       2003                  8          5

70 rows selected.
    ---------------------------------------

    Once you figure out how to break out your data, do multiple
    INSERT INTO statements.

    In the case above it would be:
    ------------------------

    INSERT INTO New_jrnl_header SELECT * FROM jrnl_header
    WHERE fiscal_year < 2000;
    COMMIT;
    INSERT INTO New_jrnl_header SELECT * FROM jrnl_header
    WHERE fiscal_year = 2000;
    COMMIT;
    INSERT INTO New_jrnl_header SELECT * FROM jrnl_header
    WHERE fiscal_year = 2001;
    COMMIT;
    ----------------------
and repeat as needed. You may want to keep this exercise in mind when you decide to start partitioning tables; the same kind of data grouping is needed when you design partitions. If you end up with many groupings, a PL/SQL loop can automate the same idea (see the sketch below).
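A minimal sketch of such a loop, using the tables above (each COMMIT ends the transaction, so no single transaction needs a huge rollback segment):
----------------------
DECLARE
  CURSOR c_years IS
    SELECT DISTINCT fiscal_year FROM jrnl_header ORDER BY fiscal_year;
BEGIN
  FOR r IN c_years LOOP
    INSERT INTO New_jrnl_header
      SELECT * FROM jrnl_header WHERE fiscal_year = r.fiscal_year;
    COMMIT;  -- one commit per group keeps each transaction small
  END LOOP;
END;
/
----------------------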

As for the rollback segments, I ran into the same problem when I went to build my partitioned tables. I found the easiest method is just to recreate them when I was done.
    ----------------------------
    ALTER ROLLBACK SEGMENT "R08" OFFLINE;
    DROP ROLLBACK SEGMENT "R08";
    CREATE ROLLBACK SEGMENT "R08" TABLESPACE "PSRBS" STORAGE ( INITIAL 1M NEXT 1M OPTIMAL 2M MINEXTENTS 2 MAXEXTENTS 50);
    ALTER ROLLBACK SEGMENT "R08" ONLINE;

    ALTER ROLLBACK SEGMENT "RBSSUPERBIG" OFFLINE;
    DROP ROLLBACK SEGMENT "RBSSUPERBIG";
    CREATE ROLLBACK SEGMENT "RBSSUPERBIG" TABLESPACE "PSRBS" STORAGE ( INITIAL 50M NEXT 50M OPTIMAL 200M MAXEXTENTS 10);
    ALTER ROLLBACK SEGMENT "RBSSUPERBIG" ONLINE;
    -------------------------------
And since I was usually doing this late at night, I would alter the smaller rollback segments offline, so that I didn't run into the MAXEXTENTS problem if a transaction was bigger (example below).
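For example (R01 and R02 are assumed names for the smaller segments; bring them back online once the big load is done):
-------------------------------
ALTER ROLLBACK SEGMENT "R01" OFFLINE;
ALTER ROLLBACK SEGMENT "R02" OFFLINE;
-- ... the big load runs here, forced onto the remaining big segment ...
ALTER ROLLBACK SEGMENT "R01" ONLINE;
ALTER ROLLBACK SEGMENT "R02" ONLINE;
-------------------------------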

I know this is probably not the most efficient way of doing it, but as your database grows, you will have to look at these issues at some point with partitioning. To give you an idea of what partitioning can do for you: when I broke a 7,787,268-row table up into monthly chunks, the access time to individual rows went from 10 to 70 msec, but the selection of rows over a two-month period went from 1000+ msec to <150 msec.
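A minimal sketch of what a monthly range-partitioned version of the table above might look like (the column list and partition names are assumptions, abbreviated for illustration):
------------------------
CREATE TABLE jrnl_header_part (
  fiscal_year       NUMBER(4),
  accounting_period NUMBER(2),
  amount            NUMBER  -- columns are assumed for illustration
)
PARTITION BY RANGE (fiscal_year, accounting_period) (
  PARTITION p2002_12 VALUES LESS THAN (2003, 1),
  PARTITION p2003_01 VALUES LESS THAN (2003, 2),
  PARTITION p_max    VALUES LESS THAN (MAXVALUE, MAXVALUE)
);
------------------------
Queries that filter on the partition key (like the two-month range selection above) then only touch the relevant partitions.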

    Any other questions, just post them.
