  1. #1
    Join Date
    Dec 2012
    Location
    Logan, Utah
    Posts
    163

    Unanswered: Update with Case

I am trying to update a parent table with a summation of its child records. The child records are being deleted because the transaction has become invalid: payment was made with a bad check, or there was a posting error. So a rollback of sorts is required.

Here is the DDL for the tables and the DML for the data:

    Code:
    DECLARE @t1 TABLE  
    (
    	[Year] int NOT NULL,
    	[Parcel] varchar(13) NOT NULL,
    	[InterestDateTime] datetime NULL,
    	[Principal] decimal(12, 2) NULL,
    	[Penalty] decimal(12, 2) NULL,
    	[Interest] decimal(12, 2) NULL
    )
    
    
    declare @t2 TABLE
    (
    	[Year] int NOT NULL,
    	[Parcel] varchar(13) NOT NULL,
    	[ReceiptNumber] int NOT NULL,
    	[TransDateTime] datetime NOT NULL,
    	[Payment] decimal(13, 2) NOT NULL,
    	[TransType] char(1) NOT NULL
    )
    
    INSERT @t1 (Year, Parcel, InterestDateTime, Principal, Penalty, Interest)
       VALUES
    (2012,'0000001809','2013-03-14 00:00:00.000',0.00,0.00,0.00),
    (2011,'0000001809','2013-03-14 00:00:00.000',0.00,0.00,0.00),
    (2010,'0000001809','2013-03-14 00:00:00.000',0.00,0.00,0.00),
    (2009,'0000001809','2013-03-14 00:00:00.000',0.00,0.00,0.00),
    (2012,'000001808X','2013-03-14 00:00:00.000',0.00,0.00,0.00),
    (2011,'000001808X','2013-03-14 00:00:00.000',0.00,0.00,0.00),
    (2010,'000001808X','2013-03-14 00:00:00.000',0.00,0.00,0.00),
    (2009,'000001808X','2013-03-14 00:00:00.000',39.23,3.00,0.00)
    
    INSERT @t2 (Year, Parcel, ReceiptNumber, TransDateTime, Payment, TransType)
       VALUES
    (2012,'0000001809',135339,'2013-03-14 15:53:20.000',3.14,'I'),
    (2012,'0000001809',135339,'2013-03-14 15:53:20.000',10.00,'P'),
    (2012,'0000001809',135339,'2013-03-14 15:53:20.000',211.15,'R'),
    (2012,'000001808X',135339,'2013-03-14 15:53:20.000',0.06,'I'),
    (2012,'000001808X',135339,'2013-03-14 15:53:20.000',9.65,'R'),
    (2011,'0000001809',135339,'2013-03-14 15:53:20.000',18.20,'I'),
    (2011,'0000001809',135339,'2013-03-14 15:53:20.000',10.00,'P'),
    (2011,'0000001809',135339,'2013-03-14 15:53:20.000',206.12,'R'),
    (2011,'000001808X',135339,'2013-03-14 15:53:20.000',32.73,'I'),
    (2011,'000001808X',135339,'2013-03-14 15:53:20.000',10.00,'P'),
    (2011,'000001808X',135339,'2013-03-14 15:53:20.000',378.70,'R'),
    (2010,'0000001809',135339,'2013-03-14 15:53:20.000',35.37,'I'),
    (2010,'0000001809',135339,'2013-03-14 15:53:20.000',10.00,'P'),
    (2010,'0000001809',135339,'2013-03-14 15:53:20.000',219.41,'R'),
    (2009,'000001808X',135339,'2013-03-14 15:53:20.000',42.74,'I'),
    (2009,'000001808X',135339,'2013-03-14 15:53:20.000',302.73,'R'),
    (2009,'000001808X',135339,'2013-03-14 15:53:20.000',100.00,'I')
I tried to use a MERGE statement with a WHEN MATCHED clause for each TransType, but it complained that I could not have multiple update statements. OK. So I tried a MERGE with a single update statement with a CASE, and it complained that I was updating the same parent multiple times, which I was and want to!

So, I tried the following update statement and it still does not work, though there is no error message.

    Code:
    update t1 set
       t1.Principal = t1.Principal + (case when t2.TransType = 'R' then t2.Payment else 0 end),
       t1.Penalty   = t1.Penalty   + (case when t2.TransType = 'P' then t2.Payment else 0 end),
       t1.Interest  = t1.Interest  + (case when t2.TransType = 'I' then t2.Payment else 0 end)
     from @t1 t1
      inner join @t2 t2 on t2.YEAR = t1.YEAR and t2.Parcel = t1.Parcel
    I am expecting the following after the update:

    Code:
    Select * from @t1
    
    2012	0000001809	2013-03-14 00:00:00.000   211.15  10.00    3.14
    2011	0000001809	2013-03-14 00:00:00.000   206.12  10.00   18.20
    2010	0000001809	2013-03-14 00:00:00.000   219.41  10.00   35.37
    2009	0000001809	2013-03-14 00:00:00.000     0.00   0.00    0.00
    2012	000001808X	2013-03-14 00:00:00.000     9.65   0.00    0.06
    2011	000001808X	2013-03-14 00:00:00.000   378.70  10.00   32.73
    2010	000001808X	2013-03-14 00:00:00.000     0.00   0.00    0.00
    2009	000001808X	2013-03-14 00:00:00.000   341.96   3.00  142.74
    All I am getting are the original values.

    Any thoughts or ideas?

  2. #2
    Join Date
    Jan 2007
    Location
    UK
    Posts
    11,434
    Provided Answers: 10
    Code:
    ; WITH t2_summary AS (
      SELECT "Year"
           , Parcel
           , Sum(CASE WHEN TransType = 'R' THEN Payment END) As Principal
           , Sum(CASE WHEN TransType = 'P' THEN Payment END) As Penalty
           , Sum(CASE WHEN TransType = 'I' THEN Payment END) As Interest
      FROM   @t2
      GROUP
          BY "Year"
           , Parcel
    )
    UPDATE t1
    SET    Principal = t1.Principal + t2_summary.Principal
         , Penalty   = t1.Penalty   + t2_summary.Penalty
         , Interest  = t1.Interest  + t2_summary.Interest
    FROM   @t1 As t1
     INNER
      JOIN t2_summary
        ON t2_summary."Year" = t1."Year"
       AND t2_summary.Parcel = t1.Parcel
    
       
    SELECT *
    FROM   @t1
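The same pre-aggregate-then-update idea can be checked outside T-SQL. Here is a minimal sketch using Python's sqlite3 (names abbreviated, one parent row only, and a portable correlated subquery in place of the joined UPDATE; this is an illustration of the technique, not the production schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE t1 (Year INT, Parcel TEXT, Principal REAL, Penalty REAL, Interest REAL);
CREATE TABLE t2 (Year INT, Parcel TEXT, Payment REAL, TransType TEXT);
INSERT INTO t1 VALUES (2012, '0000001809', 0, 0, 0);
INSERT INTO t2 VALUES
  (2012, '0000001809', 3.14,   'I'),
  (2012, '0000001809', 10.00,  'P'),
  (2012, '0000001809', 211.15, 'R');

-- Pre-aggregate the child rows per (Year, Parcel), then add each of the
-- three sums to the parent exactly once.  (A parent with no child rows
-- would need a COALESCE or a WHERE EXISTS guard.)
UPDATE t1 SET
  Principal = Principal + (SELECT SUM(CASE WHEN TransType = 'R' THEN Payment ELSE 0 END)
                           FROM t2 WHERE t2.Year = t1.Year AND t2.Parcel = t1.Parcel),
  Penalty   = Penalty   + (SELECT SUM(CASE WHEN TransType = 'P' THEN Payment ELSE 0 END)
                           FROM t2 WHERE t2.Year = t1.Year AND t2.Parcel = t1.Parcel),
  Interest  = Interest  + (SELECT SUM(CASE WHEN TransType = 'I' THEN Payment ELSE 0 END)
                           FROM t2 WHERE t2.Year = t1.Year AND t2.Parcel = t1.Parcel);
""")

row = cur.execute("SELECT Principal, Penalty, Interest FROM t1").fetchone()
print([round(v, 2) for v in row])  # [211.15, 10.0, 3.14]
```

Each parent row is written once, with totals computed first, which is exactly why the CTE version works where the row-by-row join did not.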
    George
    Home | Blog

  3. #3
    Join Date
    Dec 2012
    Location
    Logan, Utah
    Posts
    163
    Thanks for the help.

As soon as I saw the CTE, it was a hand-to-forehead moment. Of course a CTE would be the key.

However, in trying out your code, I noticed that there were NULLs in the solution table. In fact, the NULLs appeared where there should be data. So I set all the columns in the table creation to NOT NULL. Now the UPDATE in the CTE fails with a 'Cannot insert the value NULL into column Penalty, table @t1', etc.

How would the CTE or aggregate functions generate a NULL when there is not a single NULL in the data? (Trying ISNULL or COALESCE did not help.)

    What is so special about the penalty that it is not being summed correctly?

    Code:
    2012	0000001809	2013-03-14 00:00:00.000	211.15	10.00	3.14
    2011	0000001809	2013-03-14 00:00:00.000	206.12	10.00	18.20
    2010	0000001809	2013-03-14 00:00:00.000	219.41	10.00	35.37
    2009	0000001809	2013-03-14 00:00:00.000	0.00	0.00	0.00
    2012	000001808X	2013-03-14 00:00:00.000	10.65	NULL	1.06
    2011	000001808X	2013-03-14 00:00:00.000	378.70	10.00	32.73
    2010	000001808X	2013-03-14 00:00:00.000	0.00	0.00	0.00
    2009	000001808X	2013-03-14 00:00:00.000	341.96	NULL	142.74

  4. #4
    Join Date
    Feb 2008
    Location
    Japan
    Posts
    3,483
How about adding "ELSE 0", like...
    Code:
    ; WITH t2_summary AS (
      SELECT "Year"
           , Parcel
           , Sum(CASE WHEN TransType = 'R' THEN Payment ELSE 0 END) As Principal
           , Sum(CASE WHEN TransType = 'P' THEN Payment ELSE 0 END) As Penalty
           , Sum(CASE WHEN TransType = 'I' THEN Payment ELSE 0 END) As Interest
      FROM   @t2
      GROUP
          BY "Year"
           , Parcel
    )
    ...
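The reason the ELSE matters: SUM() ignores NULLs, and when a group has no row of a given TransType, every branch of the CASE yields NULL, so the SUM over that group is NULL. A small sqlite3 sketch of just that behavior (a hypothetical one-group table, mirroring parcel 000001808X / 2012, which has no 'P' row):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# 'R' and 'I' payments exist, but there is no 'P' (penalty) row.
cur.executescript("""
CREATE TABLE pay (TransType TEXT, Payment REAL);
INSERT INTO pay VALUES ('R', 9.65), ('I', 0.06);
""")

(no_else,) = cur.execute(
    "SELECT SUM(CASE WHEN TransType = 'P' THEN Payment END) FROM pay").fetchone()
(with_else,) = cur.execute(
    "SELECT SUM(CASE WHEN TransType = 'P' THEN Payment ELSE 0 END) FROM pay").fetchone()

print(no_else, with_else)  # None 0 -- ELSE 0 turns "no rows matched" into a zero
```

Without the ELSE, the NULL then propagates through `t1.Penalty + NULL` in the UPDATE, which is where the NOT NULL violation comes from.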

  5. #5
    Join Date
    Jan 2013
    Posts
    354
    Provided Answers: 1

    The data model is a bit off and you can use MERGE

    >> I am trying to update a parent table [sic] with a summation of its child [sic] records [sic]. The child [sic] records [sic] are being deleted because the transaction has become invalid because payment was made with a bad check or there was a posting error. So a rollback of sorts is required.<<

    Your mindset is still in physical files and network databases, not RDBMS. Rows are not records; totally different concepts. The terms “parent” and “child” are also from network databases. We have referenced and referencing tables as the way to model weak and strong entities. The summation is a very weak entity and can be computed from nothing but the strong entity that gives it life; drop it.

    In English, we do not store computed columns this way; we do it in a VIEW that is always correct.

    Here is a correction on your DDL to get it up to ISO-11179 rules.

    CREATE TABLE Parcel_Payments
    (parcel_year CHAR (10) NOT NULL
    CHECK (parcel_year LIKE '[12][0-9][0-9][0-9]-00-00'),
    parcel_id CHAR(13) NOT NULL,
    interest_date DATE NOT NULL,
    PRIMARY KEY (parcel_year, parcel_id, interest_date),
    principal_amt DECIMAL(12,2) NOT NULL,
    penalty_amt DECIMAL(12,2) NOT NULL,
    interest_amt DECIMAL(12,2) NOT NULL);

I like the MySQL convention of using double zeroes for months and years; that is, 'yyyy-mm-00' for a month within a year and 'yyyy-00-00' for the whole year. The advantage is that it will sort with the ISO-8601 date format required by Standard SQL. You do no math on it, so why have an INTEGER? A table (unlike your file) has to have a key, so I guessed at one.

INSERT INTO Parcel_Payments
VALUES
('2012-00-00', '0000001809', '2013-03-14', 0.00, 0.00, 0.00),
('2011-00-00', '0000001809', '2013-03-14', 0.00, 0.00, 0.00),
('2010-00-00', '0000001809', '2013-03-14', 0.00, 0.00, 0.00),
('2009-00-00', '0000001809', '2013-03-14', 0.00, 0.00, 0.00),
('2012-00-00', '000001808X', '2013-03-14', 0.00, 0.00, 0.00),
('2011-00-00', '000001808X', '2013-03-14', 0.00, 0.00, 0.00),
('2010-00-00', '000001808X', '2013-03-14', 0.00, 0.00, 0.00),
('2009-00-00', '000001808X', '2013-03-14', 39.23, 3.00, 0.00);

    CREATE VIEW Parcel_Payments_Summary
    AS
    SELECT parcel_year, parcel_id,
    SUM(principal_amt) AS principal_amt_tot,
SUM(penalty_amt) AS penalty_amt_tot,
    SUM(interest_amt) AS interest_amt_tot
    FROM Parcel_Payments
    GROUP BY parcel_year, parcel_id;

    Your second table should be a Journal or Ledger that stages changes to the Parcel Payments base table.

    CREATE TABLE Parcel_Payments_Journal
    (parcel_year CHAR (10) NOT NULL
    CHECK (parcel_year LIKE '[12][0-9][0-9][0-9]-00-00'),
    parcel_id CHAR(13) NOT NULL,
    receipt_nbr CHAR(6) NOT NULL,
    PRIMARY KEY (parcel_year, parcel_id, receipt_nbr),
    trans_date DATE NOT NULL,
    trans_amt DECIMAL(12,2) NOT NULL,
    trans_type CHAR(1) NOT NULL
    CHECK (trans_type IN ('I', 'P', 'R' ))
    );

    Do not use the old UPDATE.. FROM.. syntax; it does not work! Google this.

    MERGE INTO Parcel_Payments AS P
    USING Parcel_Payments_Journal AS J
    ON P.parcel_year = J.parcel_year
    AND P.parcel_id = J.parcel_id
    WHEN MATCHED
    THEN UPDATE
    SET principal_amt
    = principal_amt
    + (CASE WHEN J.trans_type = 'R'
    THEN J.trans_amt ELSE 0.00 END),
    penalty_amt = penalty_amt
    + (CASE WHEN J.trans_type = 'P'
    THEN J.trans_amt ELSE 0.00 END),
    interest_amt = interest_amt
    + (CASE WHEN J.trans_type = 'I'
    THEN J.trans_amt ELSE 0.00 END);

    Do you want to do an INSERT if there is no match?
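The compute-it-in-a-VIEW idea can be sketched in sqlite3 (a toy version of the schema above, with the CHECK constraints and interest_date omitted; the summary is never stored, only recomputed on each read):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE Parcel_Payments (
  parcel_year   TEXT NOT NULL,
  parcel_id     TEXT NOT NULL,
  principal_amt REAL NOT NULL,
  penalty_amt   REAL NOT NULL,
  interest_amt  REAL NOT NULL);

-- The view recomputes the totals on every query, so it is always correct.
CREATE VIEW Parcel_Payments_Summary AS
SELECT parcel_year, parcel_id,
       SUM(principal_amt) AS principal_amt_tot,
       SUM(penalty_amt)   AS penalty_amt_tot,
       SUM(interest_amt)  AS interest_amt_tot
FROM Parcel_Payments
GROUP BY parcel_year, parcel_id;

INSERT INTO Parcel_Payments VALUES
  ('2009-00-00', '000001808X', 39.23,  3.00, 0.00),
  ('2009-00-00', '000001808X', 302.73, 0.00, 142.74);
""")

summary = cur.execute(
    "SELECT principal_amt_tot, penalty_amt_tot, interest_amt_tot "
    "FROM Parcel_Payments_Summary").fetchone()
print([round(v, 2) for v in summary])  # [341.96, 3.0, 142.74]
```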

  6. #6
    Join Date
    Dec 2012
    Location
    Logan, Utah
    Posts
    163
    Quote Originally Posted by tonkuma View Post
How about adding "ELSE 0", like...
    I actually had already tried that and it did not change the output.

    Thanks for the suggestion!

  7. #7
    Join Date
    Dec 2012
    Location
    Logan, Utah
    Posts
    163
    Quote Originally Posted by Celko View Post
    Do not use the old UPDATE.. FROM.. syntax; it does not work! Google this.

    MERGE INTO Parcel_Payments AS P
    USING Parcel_Payments_Journal AS J
    ON P.parcel_year = J.parcel_year
    AND P.parcel_id = J.parcel_id
    WHEN MATCHED
    THEN UPDATE
    SET principal_amt
    = principal_amt
    + (CASE WHEN J.trans_type = 'R'
    THEN J.trans_amt ELSE 0.00 END),
    penalty_amt = penalty_amt
    + (CASE WHEN J.trans_type = 'P'
    THEN J.trans_amt ELSE 0.00 END),
    interest_amt = interest_amt
    + (CASE WHEN J.trans_type = 'I'
    THEN J.trans_amt ELSE 0.00 END);
Neither does your suggested MERGE statement work. As I stated in my OP, I have already tried the MERGE statement, and it complained about updating the same record twice, just as it did with yours.

    Do you really believe that I have tables in production named t1 and t2 without any indexes, constraints, or foreign keys? These are example tables to demonstrate a problem, which it does. Naming my variables to something else does nothing to answer the question at hand.

  8. #8
    Join Date
    Dec 2012
    Location
    Logan, Utah
    Posts
    163
    Quote Originally Posted by tonkuma View Post
How about adding "ELSE 0", like...
I don't know what is different, but I tried adding the ELSE 0 this morning and it worked; it fixed the NULL problem on the update!

Thank you for your help.

  9. #9
    Join Date
    Jan 2013
    Posts
    354
    Provided Answers: 1

    Part two

    Neither does your suggested MERGE statement. As I had stated in my OP, I have already tried to use the MERGE statement and it complained about updating the same record {sic: rows are not records} twice, just as it did with yours.

That was to show you the ANSI Standard syntax. It will catch cardinality problems that UPDATE.. FROM.. will not, and it will port. The problem is with the schema and your sample data: every transaction has the same receipt number! This is like all of your dollar bills having the same serial number. Your data will not load into the Journal table, or get past an auditor.

    Did you notice that 2009 is a bad year:

    (2009, '000001808X', 135339, '2013-03-14 15:53:20.000', 42.74, 'I'),
    (2009, '000001808X', 135339, '2013-03-14 15:53:20.000', 302.73, 'R'),
    (2009, '000001808X', 135339, '2013-03-14 15:53:20.000', 100.00, 'I') <<dup!

    You have two (trans_type = 'I') rows that differ only by the transaction amount. Right down to the microseconds! See why we need keys? I will consolidate the rows by summing them, rather than over-writing one of the values, but that is my guess, not your specs.

    I will get to ways to use the Journal shortly.

    Do you really believe that I have tables in production named t1 and t2 without any indexes, constraints, or foreign keys?
Yes, for two reasons: (1) it is what you told us, and you looked so honest; (2) I have been fixing stinky bad SQL for decades and have seen worse in production. Buy me a Scotch and we will talk about it (EAV, ID-iocy, Tibbling, etc.).

    Naming my variables to something else does nothing to answer the question at hand.
    Did you notice that I had no problem getting a new problem into ISO-11179 and other standards? If I can do this on the fly for my skeleton code, why shouldn’t you be able to provide at least keys? We need them! The CHECK() stuff could be dropped for a skeleton, but we need the skull!

    Now, back to the Journal table and approaches. To recap, here is the Journal table, with those constraints you seem to hate:

CREATE TABLE Parcel_Payments_Journal
(parcel_year CHAR(10) NOT NULL
CHECK (parcel_year LIKE '[12][0-9][0-9][0-9]-00-00'),
parcel_id CHAR(13) NOT NULL,
receipt_nbr CHAR(6) NOT NULL UNIQUE, -- important! Talk to accountants
PRIMARY KEY (parcel_year, parcel_id, receipt_nbr),
trans_date DATE NOT NULL,
trans_amt DECIMAL(12,2) NOT NULL,
trans_type CHAR(1) NOT NULL
CHECK (trans_type IN ('I', 'P', 'R'))
);

    Insert data without dups:

    INSERT INTO Parcel_Payments_Journal
    VALUES
    ('2012-00-00', '0000001809', '135338', '2013-03-14', 3.14, 'I'),
    ('2012-00-00', '0000001809', '135339', '2013-03-14', 10.00, 'P'),
    ('2012-00-00', '0000001809', '135340', '2013-03-14', 211.15, 'R'),

    ('2012-00-00', '000001808X', '135341', '2013-03-14', 0.06, 'I'),
    ('2012-00-00', '000001808X', '135342', '2013-03-14', 9.65, 'R'),

    ('2011-00-00', '0000001809', '135343', '2013-03-14', 18.20, 'I'),
    ('2011-00-00', '0000001809', '135344', '2013-03-14', 10.00, 'P'),
    ('2011-00-00', '0000001809', '135345', '2013-03-14', 206.12, 'R'),

    ('2011-00-00', '000001808X', '135346', '2013-03-14', 32.73, 'I'),
    ('2011-00-00', '000001808X', '135347', '2013-03-14', 10.00, 'P'),
    ('2011-00-00', '000001808X', '135348', '2013-03-14', 378.70, 'R'),

    ('2010-00-00', '0000001809', '135349', '2013-03-14', 35.37, 'I'),
    ('2010-00-00', '0000001809', '135350', '2013-03-14', 10.00, 'P'),
    ('2010-00-00', '0000001809', '135351', '2013-03-14', 219.41, 'R'),

    ('2009-00-00', '000001808X', '135352', '2013-03-14', 142.74, 'I'),
    ('2009-00-00', '000001808X', '135353', '2013-03-14', 302.73, 'R');


    Now, back to the Journal and approaches.

We can flatten the Journal into one row, with three columns for the amounts, within each (parcel_year, parcel_id) key, so one Journal row matches one Parcel_Payments row. I do not like that approach; I want to keep the receipt numbers, and I assume that these changes come in batches by transaction type.

    Here is the revised MERGE that you can use for a procedure with a parameter.

    BEGIN
    DECLARE @in_trans_type CHAR(1);
    SET @in_trans_type = 'I'; -- I, P, R

    MERGE INTO Parcel_Payments AS P
    USING Parcel_Payments_Journal AS J
    ON P.parcel_year = J.parcel_year
    AND P.parcel_id = J.parcel_id
AND J.trans_type = @in_trans_type
WHEN MATCHED
THEN UPDATE
SET principal_amt
= principal_amt
+ (CASE WHEN J.trans_type = 'R'
THEN J.trans_amt ELSE 0.00 END),
penalty_amt = penalty_amt
+ (CASE WHEN J.trans_type = 'P'
THEN J.trans_amt ELSE 0.00 END),
interest_amt = interest_amt
+ (CASE WHEN J.trans_type = 'I'
THEN J.trans_amt ELSE 0.00 END);

    END;

  10. #10
    Join Date
    Nov 2004
    Posts
    1,427
    Provided Answers: 4
    Quote Originally Posted by Celko
    ...
    Do not use the old UPDATE.. FROM.. syntax; it does not work! Google this.

    MERGE INTO Parcel_Payments AS P
    USING Parcel_Payments_Journal AS J
    ....
    Hi Celko,

What do you mean by "the old UPDATE ... FROM ... syntax"? Should it be replaced by a MERGE statement? Why? Can you give a URL where I can find more background explanation?
    With kind regards . . . . . SQL Server 2000/2005/2012
    Wim

    Grabel's Law: 2 is not equal to 3 -- not even for very large values of 2.
    Pat Phelan's Law: 2 very definitely CAN equal 3 -- in at least two programming languages

  11. #11
    Join Date
    Dec 2012
    Location
    Logan, Utah
    Posts
    163
    Quote Originally Posted by Celko View Post
    The problem is with the schema and your sample data. Every transaction has the same receipt number! This is like all of your dollars bills having the same serial number. Your data will not load in the Journal table. Or get past an auditor .

    Did you notice that 2009 is a bad year:

    (2009, '000001808X', 135339, '2013-03-14 15:53:20.000', 42.74, 'I'),
    (2009, '000001808X', 135339, '2013-03-14 15:53:20.000', 302.73, 'R'),
    (2009, '000001808X', 135339, '2013-03-14 15:53:20.000', 100.00, 'I') <<dup!

    You have two (trans_type = 'I') rows that differ only by the transaction amount. Right down to the microseconds! See why we need keys? I will consolidate the rows by summing them, rather than over-writing one of the values, but that is my guess, not your specs.

The module that this query will be used for works from a list of payments. For whatever reason, people will pay with a bad check or overpay, or the user just makes some sort of error. Regardless, each of these situations requires that the payments be reversed.

The user can select 1 to n receipt numbers. Selecting one account will automatically select all the rows associated with that one receipt number for that one account. Theoretically, they could select all the records in the table, but that will never happen in real life. At most, they will select 3 or 4 accounts, with each account having 1 to 3 transactions.

These transactions are copied to a temporary staging table that I called t2 in this post. Choosing only 1 account, or even 2 accounts, they would have the same receipt number (paid off 2 accounts with one check). I added the last transaction for 2009 that you flagged as a dup on purpose, so that I could verify that any solution would do the summation correctly into the proper categories.

    I provided exactly the data for what the original question required without all this additional explanation because it just wasn't needed.

For such a small temporary table, fewer than 50 records, would an index be of much use?

  12. #12
    Join Date
    Feb 2004
    Location
    In front of the computer
    Posts
    15,579
    Provided Answers: 54
    BOL for the Microsoft implementation of the MERGE statement shows how to use MERGE.

    MERGE doesn't play well with triggers, and how it gets wonky depends on too many things for that to be a simple discussion... Suffice it to say that depending on a lot of factors the trigger fire can be "interesting" to figure out!

MERGE can also, as a side effect, cause @@ROWCOUNT to be way too high. The count is actually correct based on what happens, but it can be significantly higher than the actual number of rows updated in the destination table or tables.

    -PatP
    In theory, theory and practice are identical. In practice, theory and practice are unrelated.

  13. #13
    Join Date
    Jan 2013
    Posts
    354
    Provided Answers: 1

    Here is an example of UPDATE.. FROM..

    What do you mean by "the old UPDATE ... From ... syntax"? Should it be replaced by a MERGE statement? Why?
    The old 1970's Sybase syntax is unpredictable, not portable and conceptually wrong.

    CREATE TABLE Orders
    (order_nbr INTEGER NOT NULL PRIMARY KEY,
    some_col DECIMAL (9,2) NOT NULL);

    INSERT INTO Orders
    VALUES (1, 0), (2, 0), (3, 0);

    CREATE TABLE Order_Details
    (order_nbr INTEGER NOT NULL,
    sku INTEGER NOT NULL,
    item_price DECIMAL (9,2) NOT NULL,
    PRIMARY KEY(order_nbr, sku),
    -- FOREIGN KEY(sku) REFERENCES Products(sku)
    FOREIGN KEY(order_nbr) REFERENCES Orders(order_nbr));

    INSERT INTO Order_Details VALUES (1, 1, 500.00);
    INSERT INTO Order_Details VALUES (1, 2, 205.00);
    INSERT INTO Order_Details VALUES (2, 1, 490.95);
    INSERT INTO Order_Details VALUES (3, 1, 480.00);

    SELECT * FROM Orders;

UPDATE Orders
SET Orders.some_col = Order_Details.item_price
FROM Orders
INNER JOIN Order_Details
ON Orders.order_nbr = Order_Details.order_nbr;

    /* results -- see item #1; last physical value
    1 205.00 - where is the $500.00?
    2 490.95
    3 480.00
    */

    --repeat with new physical ordering
    DELETE FROM Order_Details;
    DELETE FROM Orders;
DROP INDEX Order_Details.foobar; -- only when rerunning; the index does not exist on the first pass

    -- index will change the execution plan
    CREATE INDEX foobar ON Order_Details (order_nbr, item_price);

    INSERT INTO Orders VALUES (1, 0);
    INSERT INTO Orders VALUES (2, 0);
    INSERT INTO Orders VALUES (3, 0);

    INSERT INTO Order_Details VALUES (1, 2, 205.00);
    INSERT INTO Order_Details VALUES (1, 1, 500.00);
    INSERT INTO Order_Details VALUES (2, 1, 490.95);
    INSERT INTO Order_Details VALUES (3, 1, 480.00);

    UPDATE Orders
    SET Orders.some_col = Order_Details.item_price
    FROM Orders
    INNER JOIN
    Order_Details
    ON Orders.order_nbr = Order_Details.order_nbr;

    SELECT * FROM Orders;

    /*
    Results
    1 500.00
    2 490.95
    3 480.00
    */

    What is the first property that you must have in an INDEX? It cannot change the results of a statement, only the performance. See the problem?

    We have the ANSI/ISO Standard MERGE statement which will catch this and other problems. Use it.
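The hazard can also be modeled outside the database. A hypothetical pure-Python sketch of the "each target row is written once, from one matching source row" semantics, showing that the winner is whichever row the scan happens to visit, not anything the query text specifies:

```python
# Toy model of the old UPDATE ... FROM with a one-to-many join: each
# target row ends up with the value from ONE matching source row, and
# which one depends on physical/scan order, not on the query.
def update_from(orders, details):
    result = dict(orders)
    for order_nbr, sku, item_price in details:  # scan order = "physical" order
        if order_nbr in result:
            result[order_nbr] = item_price      # later matches silently overwrite
    return result

orders = {1: 0.0, 2: 0.0, 3: 0.0}
# Same data, two different physical orderings for order_nbr 1.
details_a = [(1, 1, 500.00), (1, 2, 205.00), (2, 1, 490.95), (3, 1, 480.00)]
details_b = [(1, 2, 205.00), (1, 1, 500.00), (2, 1, 490.95), (3, 1, 480.00)]

print(update_from(orders, details_a)[1])  # 205.0 -- where is the $500.00?
print(update_from(orders, details_b)[1])  # 500.0 -- same data, different order
```

An index that changes the scan order changes the answer, which is exactly the property an index must never have.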

  14. #14
    Join Date
    Feb 2004
    Location
    In front of the computer
    Posts
    15,579
    Provided Answers: 54
    Quote Originally Posted by Celko View Post
    We have the ANSI/ISO Standard MERGE statement which will catch this and other problems. Use it.
    ...in SQL 2008 and later versions. For code running on SQL 2005 and earlier, writing procedural code to simulate MERGE or using the UPDATE ... FROM syntax is ugly but it is all that you can do.

    -PatP
    In theory, theory and practice are identical. In practice, theory and practice are unrelated.

  15. #15
    Join Date
    Nov 2004
    Posts
    1,427
    Provided Answers: 4
    I rewrote the UPDATE statement as a MERGE statement
    Code:
    MERGE Orders AS TARGET
    USING Order_Details AS SOURCE 
    ON (TARGET.order_nbr = SOURCE.order_nbr) 
    --When records are matched, update 
    WHEN MATCHED THEN 
    UPDATE SET TARGET.some_col = SOURCE.item_price
    OUTPUT $action, 
    DELETED.order_nbr AS Target_order_nbr, 
    DELETED.some_col AS Target_some_col, 
    INSERTED.order_nbr AS Source_order_nbr, 
    INSERTED.some_col AS Source_some_col;
    SELECT @@ROWCOUNT;
    This is the error message:
    Code:
    Msg 8672, Level 16, State 1, Line 1
    The MERGE statement attempted to UPDATE or DELETE the same row more than once. 
    This happens when a target row matches more than one source row. 
    A MERGE statement cannot UPDATE/DELETE the same row of the target table 
    multiple times. 
    Refine the ON clause to ensure a target row matches at most one source row, 
    or use the GROUP BY clause to group the source rows.
    An error instead of silently doing something non-deterministic. Great. I'm convinced.

    I have a DB2 background. It did not support the UPDATE ... FROM syntax. We had to write such an UPDATE as
    Code:
    UPDATE Orders
    SET Orders.some_col = (SELECT Order_Details.item_price
    			FROM Order_Details
    			WHERE Orders.order_nbr = Order_Details.order_nbr
    			)
Talk about ugly. But it also gave an error when the subquery returned multiple rows. (I don't know whether current versions of DB2 support the UPDATE ... FROM syntax.)
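For comparison, the same standard-style correlated-subquery UPDATE can be run in sqlite3, where the engine is lenient: a scalar subquery that returns several rows silently yields one of them instead of raising the error DB2 gives, so the determinism guarantee depends on the engine (a sketch with toy data, not the production tables):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE Orders (order_nbr INTEGER PRIMARY KEY, some_col REAL NOT NULL);
CREATE TABLE Order_Details (order_nbr INTEGER, sku INTEGER, item_price REAL);
INSERT INTO Orders VALUES (1, 0), (2, 0);
INSERT INTO Order_Details VALUES (1, 1, 500.00), (1, 2, 205.00), (2, 1, 490.95);

-- Order 1 has TWO detail rows; DB2 would error here, SQLite picks one.
UPDATE Orders
SET some_col = (SELECT item_price
                FROM Order_Details
                WHERE Order_Details.order_nbr = Orders.order_nbr);
""")

rows = cur.execute("SELECT some_col FROM Orders ORDER BY order_nbr").fetchall()
print(rows)  # order 1 gets ONE of 500.0 / 205.0 -- no error, engine's choice
```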

    And I was so happy SQL Server allowed it...

    Celko, Pat, thank you for the explanation.
    With kind regards . . . . . SQL Server 2000/2005/2012
    Wim

    Grabel's Law: 2 is not equal to 3 -- not even for very large values of 2.
    Pat Phelan's Law: 2 very definitely CAN equal 3 -- in at least two programming languages
