  1. #1
    Join Date
    Dec 2007
    Location
    Richmond, VA
    Posts
    1,328
    Provided Answers: 5

    Unanswered: decompose an insert???

    Hey all,
    I'm working on DB2 V10 for z/OS and multiple versions of LUW. I am attempting a common comment table. You know how we all have different tables with comments, and most of the column is unused, but we have it defined as 1K, 2K, 4K bytes or more. Then we have the developers coming over saying they need the size of this comment column increased to x.
    Here is my plan. I have a comment table, contains:
    table name
    Comment type
    seq nbr
    parent_key(string field)
    comment(250 bytes)
    This will allow a comment to be as long as you could ever want, stored as individual rows 1 - n. I have written a query, which I will probably implement as a view, that will select all the rows for a particular parent key and give back a single comment column (i.e. string it all together).
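    Roughly, the DDL I have in mind looks something like this (names and sizes are just a sketch at this point):
    Code:
    create table common_comment (
       table_name   varchar(128) not null,
       comment_type char(4)      not null,
       seq_nbr      smallint     not null,
       parent_key   varchar(100) not null,
       comment      varchar(250) not null,
       primary key (table_name, comment_type, parent_key, seq_nbr)
    );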
    I am having difficulty with the INSERT portion. I am trying to set it up so that they just insert a single row into the view with whatever length comment they have, and then an INSTEAD OF trigger breaks that long string into 250-byte strings. I know how to string rows together and how to unstring a row into rows.
    What I am having difficulty with is performing n inserts of this long string from a single insert. Was wondering if anyone may have an idea.

    Thanks.
    Dave

  2. #2
    Join Date
    Feb 2004
    Location
    In front of the computer
    Posts
    15,579
    Provided Answers: 54
    Adjust the VARCHAR length of foo and comment as suits your needs:
    Code:
    ; WITH cte (key, seq, foo) AS (
    SELECT key, seq, Cast(comment AS VARCHAR(3999)) AS foo
       FROM comment_table
       WHERE  1 = seq
    UNION ALL SELECT comment_table.key, comment_table.seq
    ,  Cast(foo || comment AS VARCHAR(3999)) AS foo
       FROM cte
       JOIN comment_table
          ON (comment_table.key = cte.key
          AND comment_table.seq = 1 + cte.seq)
    )
    SELECT key, Max(foo) AS Comment
       FROM cte
       GROUP BY key;
    You may also only want one specific key/comment pair, which you can get by adding a WHERE clause to the final SELECT statement.

    -PatP
    In theory, theory and practice are identical. In practice, theory and practice are unrelated.

  3. #3
    Join Date
    Feb 2004
    Location
    In front of the computer
    Posts
    15,579
    Provided Answers: 54
    Note that DB2 is pretty good at handling variable length strings, especially when they are sparse, repeated, or both. You may well be "gilding the lily" and doing a bunch of needless work and obfuscation by pursuing this option.

    -PatP

  4. #4
    Join Date
    Dec 2007
    Location
    Richmond, VA
    Posts
    1,328
    Provided Answers: 5
    Pat,
    I think you missed my problem. The problem I am having is how, in an INSTEAD OF trigger, I would get it to fire n inserts. Like I said, I can break my one long string into chunks without issue, and I can put all my chunks into one string. I was attempting to do a behind-the-scenes breaking into chunks with an INSTEAD OF trigger, but I think I will not be able to do that; it may have to be done via a stored proc or maybe a UDF.
    Dave

  5. #5
    Join Date
    Feb 2004
    Location
    In front of the computer
    Posts
    15,579
    Provided Answers: 54
    Oops, sorry... You are correct, I completely missed the point that you were trying to write an INSTEAD OF trigger.

    I've never actually tried to do that, but I'm pretty sure it would just be the reverse of the cte that I described... Use the NEW reference to get the column value, split it into N-byte pieces, and insert those pieces into the comment table. The INSERT operation ought to be easy. The UPDATE might be a bigger challenge if there were either more or fewer segments than there were before the update... I might chicken out and just delete the whole lot and re-insert them to be sure.

    As I wrote before, I get the feeling that this is a case of gilding the lily... DB2 does a pretty good job of handling strings itself, so I'm not sure that you can gain enough to make this worth the effort.
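    If I did take the delete-and-re-insert route for the update, an untested sketch might look like this. I'm assuming a segment table comment_tab(key, seq, chunk) and a view comment_tab_v(key, str) over it, with 250-byte chunks (from the CLP you'd need an alternate statement terminator for the trigger body):
    Code:
    -- untested sketch: throw away the old segments, then re-insert
    -- the new string in 250-byte pieces
    create trigger comment_tab_v_iur
    instead of update on comment_tab_v
    referencing old as o new as n
    for each row
    begin atomic
       delete from comment_tab where key = o.key;
       insert into comment_tab (key, seq, chunk)
       with a (seq, start_pos, len) as (
          values (1, 1, min(length(n.str), 250))
          union all
          select a.seq + 1, a.start_pos + a.len,
                 min(length(n.str) - (a.start_pos + a.len) + 1, 250)
          from a
          where length(n.str) - (a.start_pos + a.len) + 1 > 0
       )
       select n.key, a.seq, substr(n.str, a.start_pos, a.len)
       from a;
    end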

    -PatP

  6. #6
    Join Date
    Jun 2003
    Location
    Toronto, Canada
    Posts
    5,516
    Provided Answers: 1
    @Dave: what do you gain from your multi-row approach that cannot be implemented with a single VARCHAR(32500) column (even if you decide to use a single comments table across the entire application)?

    Even if you're asked to accommodate longer comments, you can use inlined CLOBs -- I'm sure that in 99.9% of cases the comments will fit in the row, thus avoiding the performance penalty.
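    For example (just a sketch -- app_comment and the sizes are made up, and the INLINE LENGTH has to fit in the table's page size):
    Code:
    create table app_comment (
       table_name  varchar(128) not null,
       parent_key  varchar(100) not null,
       comment     clob(1m) inline length 4000,
       primary key (table_name, parent_key)
    );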
    ---
    "It does not work" is not a valid problem statement.

  7. #7
    Join Date
    Jul 2013
    Location
    Moscow, Russia
    Posts
    666
    Provided Answers: 55
    Hi Dave,

    try this:

    Code:
    create table comment_tab (
       key int not null,
       seq int not null,
       chunk varchar(10),
       primary key (key, seq)
    );
    
    create view comment_tab_v as
    select key,
           cast(listagg(chunk) within group (order by seq) as clob(5k)) as str
    from comment_tab
    group by key;
    
    create trigger comment_tab_v_iir
    instead of insert on comment_tab_v
    referencing new as n
    for each row
    insert into comment_tab
    with a (seq, start_pos, len) as (
       values (1, 1, min(length(n.str), 10))
       union all
       select a.seq + 1, a.start_pos + a.len,
              min(length(n.str) - (a.start_pos + a.len) + 1, 10)
       from a
       where min(length(n.str) - (a.start_pos + a.len) + 1, 10) > 0
    )
    select n.key, a.seq, substr(n.str, a.start_pos, a.len) as chunk
    from a;
    
    grant insert on table comment_tab_v to user your_user_name;
    
    insert into comment_tab_v values (1, 'my very long string which is longer than 10 symbols');
    
    select * from comment_tab_v where key = 1;
    select * from comment_tab where key = 1;
    This will not work correctly with non-ASCII characters, and we don't have the LISTAGG function on z/OS, so you have to use a recursive CTE like the one provided above (or XMLAGG) for the view on DB2 for z/OS.
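    For example, an untested sketch of the same view using XMLAGG, which should also work on z/OS:
    Code:
    create view comment_tab_v as
    select key,
           xmlserialize(xmlagg(xmltext(chunk) order by seq) as clob(5k)) as str
    from comment_tab
    group by key;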
    Regards,
    Mark.

  8. #8
    Join Date
    Dec 2007
    Location
    Richmond, VA
    Posts
    1,328
    Provided Answers: 5
    Mark,
    Thanks. That was the part I couldn't wrap my head around yesterday. I was thinking more along the lines of how I am going to perform n inserts.


    @Pat,
    Not after performance gains or disk savings, more my time savings. From here on out, we will not have to add a comment field, make a comment field larger, etc... A lot of these have been added and increased lately, as the business line is more after customer experience tracking and surveys right now.

    @nick,
    We had thought about that, but with something like 95% of these comments being smaller, we decided to go with the average and then do the multiple rows when they need it.
    Dave

  9. #9
    Join Date
    Jun 2003
    Location
    Toronto, Canada
    Posts
    5,516
    Provided Answers: 1
    With 95% of comments being smaller (than what?), it is a perfect case for a VARCHAR (if the limit is under 32K bytes) or an inlined CLOB column. No application changes, no views, no triggers, no performance degradation.
