
Thread: Concat string

  1. #1
    Join Date
    Jan 2004
    Location
    Sochaux (France)
    Posts
    55

    Unanswered: Concat string

    Hi,
    I have a problem that I would like to describe here, in order to gather more ways of solving it.

    I have to store a string built from the results of a SELECT, concatenating the pieces together.
    There are potentially 667 VARCHAR2(3) fields, but only about 200 of them are actually used for a given product, and never at the same positions.
    In the unused positions I put 3 spaces: '   '. I read each field's rank (its position) from a table.
    When I have finished reading all the fields, I concatenate them into one VARCHAR2(2001), which I insert into a table (see the sketch below).
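    To give an idea, here is a simplified sketch of what I am building (the table and column names are invented for the example):

        DECLARE
          buf VARCHAR2(2001) := RPAD(' ', 2001);  -- 667 slots of 3 characters, pre-filled with spaces
          pos PLS_INTEGER;
        BEGIN
          FOR r IN (SELECT attr_rank, attr_val
                    FROM   attr_source
                    WHERE  product_id = 12345) LOOP
            pos := (r.attr_rank - 1) * 3 + 1;     -- character offset of slot number attr_rank
            buf := SUBSTR(buf, 1, pos - 1) || RPAD(r.attr_val, 3) || SUBSTR(buf, pos + 3);
          END LOOP;
          INSERT INTO target_table (product_id, attrs) VALUES (12345, buf);
        END;
        /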

    I used a VARRAY, as I posted yesterday here: http://www.dbforums.com/t979206.html

    But it costs a lot.
    So I don't know if I am using the right structure to hold my temporary string. At the moment I am working with a temporary table instead of the VARRAY, but the results are not much better.

    So, is there another way to store this string?
    Can you give me some advice on this problem?

    Thanx

  2. #2
    Join Date
    Sep 2002
    Location
    UK
    Posts
    5,171
    Provided Answers: 1

    Re: Concat string

    I would question why you are doing this in the first place. A 2001-character string comprising 667 3-character "fields" is too much for any human to read and digest I would think, so presumably you are not doing this to format output for a report, but merely to store the data. Why? Why not store each of the 200 non-null values in a separate row as a varchar2(3) value?
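    Something along these lines, for illustration (the names here are hypothetical):

        CREATE TABLE product_attr (
          product_id NUMBER      NOT NULL,
          attr_rank  NUMBER(4)   NOT NULL,  -- which of the 667 positions this value occupies
          attr_val   VARCHAR2(3) NOT NULL,
          CONSTRAINT product_attr_pk PRIMARY KEY (product_id, attr_rank)
        );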

  3. #3
    Join Date
    Jan 2004
    Location
    Sochaux (France)
    Posts
    55
    I will try to explain, though as you can see, my English is not very good.
    We have to store 1107 different 'attributes' (667 + 200 + 100 + 100 + 40, spread across five different kinds).
    At the moment we store each 'attribute' as one row in a table (t1). So for one product we have about 200 rows, because each product has, on average, 200 of the 1107 possible 'attributes'.
    The problem is that we store 17,500,000 products in a table (t2).
    So t1 has around 3.3 billion records (17,500,000 products * ~200 'attributes' each), and we run into many response-time problems.

    So we first built a table with the 17.5 million records and added 980 columns, so as to hold as many of the 'attributes' as possible, each in its own column.
    Response times were not much better.

    So we decided to group the 'attributes' by kind into a single column per kind, and to run our queries with SUBSTR on this big string, as in the example below.
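    For example, to find the products whose attribute at rank :n has a given value, we run something like this (simplified, with invented names):

        SELECT product_id
        FROM   products_wide
        WHERE  SUBSTR(attrs_kind1, (:n - 1) * 3 + 1, 3) = :wanted_val;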
    We already have another project with this kind of architecture, and it works very well.
    But that project never had to initialize the database from historical data, as I have to do.
    I hope that answers your question.
    I work for a large private company in France, and we are in a critical phase; this problem mobilizes approximately 30 people. Oracle experts came to help us, though not specifically on the recovery of our history, rather on the parameters of the database.
    And my job on this project is to transform the 3.3 billion row table into a 17.5 million row table with this concatenation of 'attributes'.


    Just for information: I finished my test with the temporary table, and it multiplied the processing time by 10.

  4. #4
    Join Date
    Sep 2002
    Location
    UK
    Posts
    5,171
    Provided Answers: 1
    The trouble is, you will go through all that effort and then probably find your performance is no better than with the 3.3 billion row table - or maybe worse! Perhaps it would have been better to stick with the 3.3 billion row table but cluster it with the products table.
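    For the record, clustering the two tables would look roughly like this (the names and the SIZE value are hypothetical):

        CREATE CLUSTER product_cluster (product_id NUMBER)
          SIZE 800;  -- rough guess at the bytes needed per product

        CREATE INDEX product_cluster_idx ON CLUSTER product_cluster;

        CREATE TABLE products (
          product_id NUMBER PRIMARY KEY,
          prod_name  VARCHAR2(100)
        ) CLUSTER product_cluster (product_id);

        CREATE TABLE product_attr (
          product_id NUMBER      NOT NULL,
          attr_rank  NUMBER(4)   NOT NULL,
          attr_val   VARCHAR2(3) NOT NULL
        ) CLUSTER product_cluster (product_id);

    That way each product's attribute rows are stored physically next to the product row, so reading one product's ~200 attributes touches only a block or two.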

    But as for your specific issue with arrays - I don't know I'm afraid.

  5. #5
    Join Date
    Jan 2004
    Location
    Sochaux (France)
    Posts
    55
    Originally posted by andrewst
    The trouble is, you will go through all that effort and then probably find your performance is no better than with the 3.3 billion row table - or maybe worse! Perhaps it would have been better to stick with the 3.3 billion row table but cluster it with the products table.

    But as for your specific issue with arrays - I don't know I'm afraid.
    For the moment, we have initialized 4 partitions of 750,000 records each in our new table, and depending on the queries we run, response times are divided by 3 to 10. So our solution does not seem so bad.
    But the problem is that initializing one partition (750,000 records crossed with 150 million 'attributes') takes 5 hours, and we have to initialize 28 partitions. So I am looking for help to optimize the load of each partition...
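    One direction I am looking at is a single pass over the source, ordered by product, building each string in memory and inserting in bulk. A simplified sketch, shown for a single 'kind' column (all names invented):

        DECLARE
          CURSOR c IS
            SELECT product_id, attr_rank, attr_val
            FROM   old_attrs                        -- the slice of history feeding this partition
            ORDER  BY product_id, attr_rank;

          TYPE t_ids  IS TABLE OF NUMBER         INDEX BY PLS_INTEGER;
          TYPE t_bufs IS TABLE OF VARCHAR2(2001) INDEX BY PLS_INTEGER;

          ids    t_ids;
          bufs   t_bufs;
          n      PLS_INTEGER := 0;
          cur_id NUMBER;
          buf    VARCHAR2(2001);
          pos    PLS_INTEGER;

          PROCEDURE flush_rows IS
          BEGIN
            IF n > 0 THEN
              FORALL i IN 1 .. n                    -- array-insert the pending batch
                INSERT INTO new_products (product_id, attrs)
                VALUES (ids(i), bufs(i));
            END IF;
            n := 0;
          END;
        BEGIN
          FOR r IN c LOOP
            IF cur_id IS NULL OR r.product_id <> cur_id THEN
              IF cur_id IS NOT NULL THEN
                n := n + 1;  ids(n) := cur_id;  bufs(n) := buf;
                IF n = 1000 THEN flush_rows; END IF;  -- insert in batches of 1000 products
              END IF;
              cur_id := r.product_id;
              buf    := RPAD(' ', 2001);              -- 667 empty 3-character slots
            END IF;
            pos := (r.attr_rank - 1) * 3 + 1;
            buf := SUBSTR(buf, 1, pos - 1) || RPAD(r.attr_val, 3) || SUBSTR(buf, pos + 3);
          END LOOP;
          IF cur_id IS NOT NULL THEN                  -- don't forget the last product
            n := n + 1;  ids(n) := cur_id;  bufs(n) := buf;
          END IF;
          flush_rows;
          COMMIT;
        END;
        /

    The ORDER BY lets me build each string in one pass and avoid the temporary table entirely; the batching avoids one INSERT per product.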
