I ran a small test on 10.5 FP4: inserting 17.6M rows into two initially empty staging tables, one row-organized and one column-organized, each with 70 columns.
I used IMPORT from a DEL file with COMMITCOUNT set to 10K and to 100K.
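The IMPORT invocation was of this form (file and table names here are illustrative, not the actual ones from the test):

  db2 "import from staging.del of del commitcount 100000 insert into stage_col"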
In both cases the inserts into the column-organized table were about three times slower (21-25 min for the row table vs. 61-63 min for the column table).
Also, after the IMPORT the column table's compression is poor: 7.8 GB vs. 9.8 GB uncompressed. After a LOAD the same column table is only 980 MB and the runtime is about 4 minutes (UTIL_HEAP_SZ=4GB).
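For comparison, the LOAD run was of this form (again with illustrative names):

  db2 "load from staging.del of del replace into stage_col"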
Is there any way to speed up inserts (and improve compression) for column-organized tables?
It seems that IMPORT doesn't create a good enough table-level compression dictionary when it builds one from only a 100K-row sample.
Try testing with the dictionary created by LOAD: once a LOAD has built the dictionary, empty the table while keeping that dictionary, using:
  load from /dev/null of del replace keepdictionary into mytable
Then try the IMPORT from the file again.
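Putting the steps together, the test sequence would look like this (illustrative names; the first LOAD builds the column dictionaries from the full data set, the second empties the table while preserving them):

  db2 "load from staging.del of del replace into mytable"
  db2 "load from /dev/null of del replace keepdictionary into mytable"
  db2 "import from staging.del of del commitcount 100000 insert into mytable"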
Another option, if you don't want to use LOAD for the actual data movement for some reason, is to use the RESETDICTIONARYONLY and ROWCOUNT options of the LOAD command up front.
In the first stage you build the table-level dictionary, without loading any data, from some "large enough" sample of the file.
Then you can use IMPORT or INGEST for the actual data loading, as sketched below.
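A sketch of the two stages (illustrative names; ROWCOUNT limits how many records are read for dictionary building, and RESETDICTIONARYONLY builds the dictionary without loading any rows, leaving the table empty):

  db2 "load from staging.del of del rowcount 1000000 replace resetdictionaryonly into mytable"
  db2 "import from staging.del of del commitcount 100000 insert into mytable"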