
Thread: BLU inserts

  1. #1
    Join Date
    Sep 2007

    Unanswered: BLU inserts


    Ran one small test on 10.5 FP4: inserting 17.6M rows into two staging tables, one row-organized and one column-organized (70 columns, tables initially empty).
    Used IMPORT from a DEL file with COMMITCOUNT 10K and 100K.

    In both cases the insert into the column-organized table was about 3 times slower (21-25 min for the row-organized table vs. 61-63 min for the column-organized one).
    Also, after the inserts, compression of the column-organized table is poor (7.8 GB vs. 9.8 GB uncompressed); after a LOAD the same column-organized table is 980 MB and the runtime is about 4 min (UTIL_HEAP_SIZE=4GB).

    Is there any way to speed up inserts (and improve compression) for column-organized tables?
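    For reference, the IMPORT invocation in a test like this would look roughly as follows (file name, commit count, and table name are placeholders, not taken from the post):

    ```
    import from staging.del of del commitcount 100000 insert into mytable_col
    ```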


  2. #2
    Join Date
    Jul 2013
    Moscow, Russia
    Provided Answers: 55

    Seems that IMPORT doesn't build a good enough table-level compression dictionary from its 100K-row sample.
    Try to test it with the dictionary created by LOAD, using:
    load from /dev/null of del replace keepdictionary into mytable
    Then try to import from the file again.
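    Assuming the table was already loaded once (so a LOAD-built dictionary exists, as in the LOAD test described above), the full sequence might look like this (file and table names are placeholders):

    ```
    -- truncate the table but keep the existing LOAD-built compression dictionary
    load from /dev/null of del replace keepdictionary into mytable
    -- re-populate via IMPORT, which now reuses the good dictionary
    import from staging.del of del commitcount 100000 insert into mytable
    ```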

    Another option may be to use the RESETDICTIONARYONLY and ROWCOUNT options of the LOAD command if you don't want to use LOAD for the actual data movement for some reason.
    At the first stage you build the table-level dictionary, without loading any data, from a "large enough" sample of rows in the file.
    Then you can use IMPORT or INGEST for the actual data loading.
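    A sketch of that two-stage approach (row count, file name, and table name are illustrative placeholders): the first LOAD reads a sample of rows only to rebuild the column compression dictionary without inserting any data, and the second stage does the actual population.

    ```
    -- stage 1: build the table-level dictionary from the first 1M rows; no data is loaded
    load from staging.del of del rowcount 1000000 replace resetdictionaryonly into mytable
    -- stage 2: load the actual data with IMPORT (or INGEST), reusing that dictionary
    import from staging.del of del commitcount 100000 insert into mytable
    ```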
