  1. #1
    Join Date
    Jan 2002

    Deleting 10 million records from 90 million

    I have 2 big tables in a high-transaction system, holding 9 months of data [about 90 million rows]. Each table has around 40 columns [the datafiles allocated total 30 GB for the two tables and 40 GB for their indexes]. The tables are not partitioned. Only 4 GB of disk is now left on the server. I want to delete the first month of data [about 10 million rows], but a plain DELETE is not workable: a test delete of 300 thousand rows took 3 hours.

    Because of the lack of disk space I can't use CREATE TABLE AS SELECT to copy the rows I want to keep into a temporary table, truncate the original table, and reload it.

    Any suggestions?
    Is there any way to export a logical backup and then import only the selected data with a filter condition?

  2. #2
    Join Date
    Sep 2003
    Milan, Italy
    You can use the QUERY option of EXP to select only the rows you want to keep, drop the table, and reimport it via IMP.
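    For example (the username, table name, and date column below are made up; adjust to your schema). A parameter file avoids the shell-escaping headaches of passing QUERY on the command line:

    ```shell
    # Sketch only -- scott/tiger, big_table and txn_date are placeholders.
    cat > keep.par <<'EOF'
    TABLES=big_table
    FILE=keep.dmp
    QUERY="WHERE txn_date >= TO_DATE('01-FEB-2004','DD-MON-YYYY')"
    EOF
    exp scott/tiger PARFILE=keep.par

    # Then empty the table (TRUNCATE releases the space, unlike DELETE)
    # and bring the kept rows back:
    #   SQL> TRUNCATE TABLE big_table;
    #   $ imp scott/tiger TABLES=big_table FILE=keep.dmp IGNORE=Y
    ```

    Note the dump file itself needs room somewhere, though it can be written to a different filesystem than the datafiles.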

    Strange, anyway, that it takes so long to delete those rows; perhaps there are too many indexes, or the CBO is using one of them inefficiently. Or do you have an unindexed FK pointing to the table?
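    A quick way to check for unindexed FKs from the data dictionary; this is a simplified sketch (it only matches columns position-by-position), so treat the output as a starting point:

    ```sql
    -- Simplified sketch: lists FK columns on child tables that have no index
    -- covering the same column position.  An unindexed FK forces Oracle to
    -- lock/scan the child table for every parent row you delete.
    SELECT c.table_name, cc.column_name, cc.position
      FROM user_constraints  c,
           user_cons_columns cc
     WHERE c.constraint_name = cc.constraint_name
       AND c.constraint_type = 'R'
       AND NOT EXISTS (SELECT NULL
                         FROM user_ind_columns ic
                        WHERE ic.table_name      = cc.table_name
                          AND ic.column_name     = cc.column_name
                          AND ic.column_position = cc.position)
     ORDER BY c.table_name, cc.position;
    ```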

    One possibility could be to drop the indexes (or set them unusable), perform the delete, and then recreate/rebuild them. Worth investigating, though I have never done it myself.
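    Roughly like this (table, index and column names are placeholders; note you cannot set a UNIQUE index unusable and still do DML on the table). Deleting in batches keeps the rollback/undo usage bounded:

    ```sql
    -- Placeholders: big_table, big_table_ix1, txn_date.
    ALTER INDEX big_table_ix1 UNUSABLE;
    ALTER SESSION SET skip_unusable_indexes = TRUE;

    -- Delete the old month in small batches:
    BEGIN
      LOOP
        DELETE FROM big_table
         WHERE txn_date < TO_DATE('01-FEB-2004','DD-MON-YYYY')
           AND ROWNUM <= 100000;
        EXIT WHEN SQL%ROWCOUNT = 0;
        COMMIT;
      END LOOP;
      COMMIT;
    END;
    /

    ALTER INDEX big_table_ix1 REBUILD NOLOGGING;
    ```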

    I would seriously consider partitioning the table - next month you will be in the same situation ...
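    Something along these lines (the column list and dates are placeholders); once the table is range-partitioned by month, ageing out the oldest month becomes a quick DDL operation instead of a 10-million-row DELETE:

    ```sql
    -- Sketch: only two columns shown, the real table has ~40.
    CREATE TABLE big_table_part (
      txn_date DATE NOT NULL,
      amount   NUMBER
    )
    PARTITION BY RANGE (txn_date) (
      PARTITION p_2004_01 VALUES LESS THAN (TO_DATE('01-FEB-2004','DD-MON-YYYY')),
      PARTITION p_2004_02 VALUES LESS THAN (TO_DATE('01-MAR-2004','DD-MON-YYYY'))
      -- ... one partition per month
    );

    -- Dropping a month is then a fast dictionary operation:
    ALTER TABLE big_table_part DROP PARTITION p_2004_01;
    ```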

