  1. #1
    Join Date
    Jul 2003
    Location
    SoCal
    Posts
    721

    Unanswered: Serial Trigger Executions

I'm having an issue with triggers that fire serially within a transaction. The queries that fire the triggers come from an ASP page (which is why I originally posted my question there - alas, no response).

The first query executes and fires a trigger, then the second query executes, which also fires a trigger (although this second query does not necessarily have to fire a trigger for the error to occur). When this happens, I receive an error message (which can be found in the link below to the other forum post). It hinges on the firing of a trigger followed by the execution of another query. It seems as though the trigger is still executing when the second query goes off, causing them to collide. I'm unsure why this occurs, though - I thought triggers executed independently of the original query (and therefore of its associated Recordset object in ASP).

    Does anyone have any ideas?

Here's the link to the original post in the ASP forum (it's much more detailed than this one):

    http://65.61.175.198/t890561.html

    Thanks!

  2. #2
    Join Date
    Jul 2003
    Location
    San Antonio, TX
    Posts
    3,662
    see my post on ASP.

  3. #3
    Join Date
    Jun 2003
    Location
    Ohio
    Posts
    12,592
    Provided Answers: 1
    Seppuku,

If you are just using triggers to cascade column updates, consider defining foreign keys and cascading referential integrity constraints instead. This method of cascading updates is preferred over triggers for maintaining referential integrity.
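A rough sketch of what I mean (the table and column names here are made up, not from your schema):

    ALTER TABLE tblChild
        ADD CONSTRAINT FK_tblChild_tblParent
        FOREIGN KEY (iParentID) REFERENCES tblParent (iParentID)
            ON DELETE CASCADE
            ON UPDATE CASCADE

With that in place, changing or deleting the parent key propagates to the child rows automatically - no trigger code to maintain.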

    Also, I'd be willing to bet that a few changes to your table design could eliminate a lot of the coding problems you are having. If you have control over table layouts, post them, along with their triggers, and we might be able to suggest ways to tweak them.

    blindman

  4. #4
    Join Date
    Jul 2003
    Location
    SoCal
    Posts
    721
    Originally posted by blindman
    Also, I'd be willing to bet that a few changes to your table design could eliminate a lot of the coding problems you are having. If you have control over table layouts, post them, along with their triggers, and we might be able to suggest ways to tweak them.
    Well I'm almost certain that's true. I'm not a DBA, so I'm forced to do the data structure myself.

Let me break this out a bit more (this doesn't necessarily follow my example above). The products are "Effective Dated" (EffDate), meaning each product has a date/time from which it is available. If I change the EffDate (i.e. a DB UPDATE), I have a trigger that intercepts the update and inserts a new row with the new EffDate instead. This versions the product, so the products table can hold many versions of the same product, only one of which is effective at any given time.
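For reference, the versioning trigger is roughly this shape (heavily simplified and untested here; the column names match the basic schema I post further down, and the real trigger handles more columns):

    CREATE TRIGGER trgProducts_Version ON tblProducts
    INSTEAD OF UPDATE
    AS
    BEGIN
        IF UPDATE(dtEffDate)
            -- EffDate changed: insert a new version of the product instead of updating in place
            INSERT INTO tblProducts (cProdCode, sProdName, dtEffDate)
            SELECT i.cProdCode, i.sProdName, i.dtEffDate
            FROM inserted i
        ELSE
            -- EffDate unchanged: just apply the update to the existing row
            UPDATE p
            SET sProdName = i.sProdName
            FROM tblProducts p
            JOIN inserted i ON p.iProdID = i.iProdID
    END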

This is simple enough. The problem lies in the tables associated with the product. For one example, there is a table that defines rules an individual must pass before they are eligible for the product. That product rules table has a translation table of values associated with each rule. So there could be dozens of rules (and versions of rules), and dozens of products (and versions of products), all with different values (and versions of values) that govern the product. If a product is versioned, all of its associated records need to be duplicated as well (otherwise true versioning is lost). To accomplish this, my UPDATE trigger not only creates a new version of the product, it also updates the associated tables.

To me it seemed better to cascade the changes. Instead of my single UPDATE trigger duplicating records directly in the rules translation table, it made more sense to have the UPDATE trigger update the product rules table, which would then cascade the change down to the translation table.

So at the most basic level, yes, I'm doing a column update, but it's a column update to a record that has to be duplicated before the change is made, in order to provide a history of changes to a product (including all the tables that govern its behavior).

    Now you may see how this poses a problem for foreign keys.

As for optimization... you may be entirely right. I might not be doing this in the most efficient way possible, but I'm trying to strike a happy medium between levels of normalization. Right now I'm looking at about 16 tables, so I'm not sure how to represent the schema here.
    Last edited by Seppuku; 08-25-03 at 12:46.

  5. #5
    Join Date
    Jul 2003
    Location
    San Antonio, TX
    Posts
    3,662
You should definitely consider using sp's vs. triggers. I fully understand your "versioning" scenario and have implemented it myself in an HMO business system, where plans are offered to members with a specific configuration that is time-stamped. In addition, I also control the ability to create a new plan effectiveness record by setting an effectiveness period for the plan (datestart, dateend). This way no new period can be started until an existing one is expired or overridden.

The difference is that I implemented a ProductMaster that holds only the minimal amount of information that will never change once the plan (product in your case) is created. All changeable attributes hang off the product_effectiveness table. Plan subordinate attributes (rules in your case) are structured the same way, with rule_master and, let's say, rule_detail.

I was also considering using cascading triggers, but soon discovered that maintenance and debugging would be a nightmare. So I reverted to sp's, and it has worked out very well as far as performance and data integrity go. Of course, time invested in operation success/failure validation can only be fully appreciated once your app is in production.
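The core of the "no overlapping periods" rule ends up looking something like this (stripped down, untested, and the table/column names here are placeholders rather than the real ones):

    CREATE PROCEDURE usp_AddPlanEffectiveness
        @PlanID    int,
        @DateStart datetime,
        @DateEnd   datetime
    AS
    BEGIN
        -- Reject the new period if an unexpired one already exists for this plan
        IF EXISTS (SELECT 1
                   FROM plan_effectiveness
                   WHERE plan_id = @PlanID
                     AND date_end > @DateStart)
        BEGIN
            RAISERROR ('An active effectiveness period already exists for this plan.', 16, 1)
            RETURN 1
        END

        INSERT INTO plan_effectiveness (plan_id, date_start, date_end)
        VALUES (@PlanID, @DateStart, @DateEnd)

        RETURN 0
    END

The calling code (or a wrapper proc) checks the return code / error, which is the success/failure validation I mentioned.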

  6. #6
    Join Date
    Jul 2003
    Location
    SoCal
    Posts
    721
    Very true.. this is healthcare related as well, but not healthcare plans. This is an associate self-service type of app.

So in your case, you created a stored procedure that mimics a cascading trigger (except that it doesn't fire automatically - you have to invoke it). You have SPs that do one set of updates, and you cascade those changes through other SP executes?

It's easy to do.. I'm not going to argue that.. I know exactly what would have to change. I'm curious whether it would work as a blend of the two. What if I executed an update which fired a trigger which called an SP to handle all the updates? Overkill? My previous background as a sys admin screams "Don't leave the table unprotected!"
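In other words, something along these lines (just to illustrate the question; usp_CascadeProductVersion is a made-up name, and this sketch only handles a single-row update):

    CREATE TRIGGER trgProducts_Upd ON tblProducts
    AFTER UPDATE
    AS
    BEGIN
        DECLARE @iProdID int

        -- Single-row assumption: grab the key and hand the real work off to a proc
        SELECT @iProdID = iProdID FROM inserted

        EXEC usp_CascadeProductVersion @iProdID
    END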

  7. #7
    Join Date
    Jun 2003
    Location
    Ohio
    Posts
    12,592
    Provided Answers: 1
I don't think triggers are appropriate for this problem. You need to create a stored procedure like "sp_AddNewVersion", or something to that effect.

    Then, you have to make sure you lock down your database so as not to allow any direct access to the data tables. All changes should be made through stored procedures, and your application should only have permission to access those procedures.
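In practice that means granting your application's login execute rights on the procedures and nothing on the tables, along these lines (webapp_user is just a placeholder for whatever login your ASP pages connect with):

    GRANT EXECUTE ON sp_AddNewVersion TO webapp_user
    DENY SELECT, INSERT, UPDATE, DELETE ON tblProducts TO webapp_user
    DENY SELECT, INSERT, UPDATE, DELETE ON tblProductRulesList TO webapp_user

As long as the procedures and the tables have the same owner, ownership chaining lets the procedures touch the tables even though the login cannot.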

I am curious as to how you defined your primary keys. Are you cascading any changes to primary key columns? (I don't mean inserting new records.) If this causes confusion, consider using a primary key that is an identity or GUID (my preference), which would remain the same throughout the life of the record - no cascading updates necessary!
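For example (just a sketch, with a made-up table name; pick one of the two key styles):

    CREATE TABLE tblExample (
        iRowID int IDENTITY(1,1) NOT NULL PRIMARY KEY,
        -- or, for a GUID key instead:
        -- uRowID uniqueidentifier NOT NULL DEFAULT NEWID() PRIMARY KEY,
        sSomeData nvarchar(50) NULL
    )

Because the key carries no business meaning, it never changes, so there is nothing to cascade.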

    blindman

  8. #8
    Join Date
    Jul 2003
    Location
    SoCal
    Posts
    721
    Originally posted by blindman
    I am curious to how you defined your primary keys. Are you cascading any changes to primary key columns? (I don't mean inserting new records.) If this causes confusion, consider using a primary key that is an index or guid (my preference), which would remain the same throughout the life of the record - no cascading updates necessary!
Each row, no matter the version, has a unique ID. This is the primary key. When a product is updated, I use the unique ID to reference it. The secondary support tables are linked (not by a key relationship) via a "ProdCode" (Product Code), which all versions of the same product share. When I need to update a value in a secondary table, I join on the ProdCode columns and the EffDate columns to get the unique IDs of the rows in the secondary tables related to a specific version. Then I can directly update that row of the table using its UID.

    A schema example would look like this (this is a VERY basic example):

    tblProducts:
    iProdID (int) (key) (UID)
    cProdCode (char)
    sProdName (nvarchar)
    dtEffDate (smalldatetime)

    tblProductRulesList:
    iProdRuleID (int) (key) (UID)
    cProdCode (char)
    iRuleID (int) (foreign key to tblRules:iRuleID)
    dtEffDate (smalldatetime)

I didn't show "tblRules" because it's just a definition of a rule and a pointer to a function to execute that rule with some values. In these basic schema examples, I could have multiple Products and multiple Rules linked to the product through the "tblProductRulesList" table. To make sure I get the right version of the RulesList for a Product, I JOIN the two tables together on cProdCode and dtEffDate, which yields the data. Then I can use iProdRuleID to directly edit the Rule List for a given version of a Product.
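The retrieval query is basically this (simplified; in reality the product's UID comes from the ASP page):

    DECLARE @iProdID int
    SET @iProdID = 123  -- the UID of the product version I'm working with

    SELECT prl.iProdRuleID, prl.iRuleID
    FROM tblProducts p
    JOIN tblProductRulesList prl
      ON prl.cProdCode = p.cProdCode
     AND prl.dtEffDate = p.dtEffDate
    WHERE p.iProdID = @iProdID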

    But this is all assuming that dtEffDate hasn't changed. If, in the update to the Product, the user changes the Effective Date, then I need to generate new versions of everything first. If the Effective Date has not changed, then all I need to do is update the associated rows.

It's early.. I hope that made sense.. I'm off to dig up some caffeine..

    Thanks again blindman for taking some time!
    Last edited by Seppuku; 08-26-03 at 11:58.

  9. #9
    Join Date
    Jun 2003
    Location
    Ohio
    Posts
    12,592
    Provided Answers: 1
    I suspected you might have your tables set up like this.

    I understand the advantage of using a natural key as the clustered index and for linking sub-tables; you get the convenience of seeing that valuable information any time you view the subtable, without having to link to the parent tables. The price you pay, though, is added overhead in maintaining referential integrity.

    Unless I misunderstand your application (not unlikely), you could link your tables based on the UID and never have to cascade updates (just deletes). If nothing else, you are violating the basic rules of normalization by storing cProdCode and dtEffDate in two different tables, and across multiple records in your subtable. Increasing the volume and complexity of your code is an almost inevitable result. Couldn't you set up your tables like this:

    tblProducts:
    iProdID (int) (key) (UID)
    cProdCode (char)
    sProdName (nvarchar)
    dtEffDate (smalldatetime)

    tblProductRulesList:
    iProdRuleID (int) (key) (UID)
    iProdID (int) (UID)
    iRuleID (int)

What am I missing here in your requirements?

    blindman

  10. #10
    Join Date
    Jul 2003
    Location
    San Antonio, TX
    Posts
    3,662
The issue here is not in how many fields are used to uniquely identify a row, but in the fact that volatile and non-volatile attributes are stored in the same table, which is then used as a master table. If you move your date into a different table with iProdID as an FK referencing tblProducts, then you can link tblProductRulesList to it either via iProdID and the date field, or via an additional field that you would create in that ProductDetail table (an identity field, for example).
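Roughly like this (sketch only; tblProductDetail and its columns are my naming, and it assumes iProdID stays the primary key of tblProducts):

    -- Non-volatile attributes stay in tblProducts (the master).
    -- Volatile, effective-dated attributes move to a detail table.
    CREATE TABLE tblProductDetail (
        iProdDetailID int IDENTITY(1,1) NOT NULL PRIMARY KEY,
        iProdID       int NOT NULL REFERENCES tblProducts (iProdID),
        sProdName     nvarchar(100) NOT NULL,
        dtEffDate     smalldatetime NOT NULL
    )

tblProductRulesList can then carry either iProdDetailID, or iProdID plus the date, to tie a rule list to a specific version.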

    But the real question here is (taken from Seppuku's original post):

    'Start Processing ASP'

    Form values are submit

    Begin Transaction

    These values require an update to the ProductProperties before we update the Product. This update triggers an update to the ProductValues

    The Product is now updated which fires a trigger to duplicate the product's associated records in the subordinate table ProductProperties, which then fires another trigger to duplicate the products records in the tertiary table PropertyValues.

    Close Transaction

    'End Processing ASP'

    As you can see, this process path fires three triggers. One trigger in the first execution, and two triggers in the second execution.

    At the execution of that second statement, I am confronted with the following ASP error...

    Seppuku needs an explanation of behavior, not a lecture on superiority of bm's database design methodology!

    P.S.: bm - follow your own recommendations.

  11. #11
    Join Date
    Jul 2003
    Location
    San Antonio, TX
    Posts
    3,662
In addition, the cascading RI that bm referred to earlier as the preferred way has its own issues in the current editions of SQL Server:

    BUG: A Delete Operation That Involves a Cascade Delete Updates the Statistics Incorrectly

  12. #12
    Join Date
    Jul 2003
    Location
    SoCal
    Posts
    721
    The funny thing is, I originally used iProdID as the UID but I ran into problems when I decided I needed versioning (which was before my decision to use triggers).

I'm in agreement that having the ProdCode and EffDate in both tables is duplicate data. I found (at the time) that when I used a UID in the child table's records to reference a record in the parent table, duplicating a version (and its children) wouldn't work, because I didn't have the UID of the new product version before its children were created.

I suppose that with a trigger or an SP, I could use @@IDENTITY when creating that new version. So the process would be that, programmatically, the decision is made to create a new version of the product. This new version is created, the UID is stored in a variable from @@IDENTITY, then the children are created using the new UID as their reference. Then the EffDate and ProdCode would not need to be stored in the child tables, but it leaves me wondering whether I'm going to find it difficult to retrieve the products easily.
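Something like this, I think (untested; it assumes the restructured tables blindman suggested, where the child rows carry iProdID, and @OldProdID/@NewEffDate would really be parameters of a proc). Also, anywhere triggers are involved, SCOPE_IDENTITY() is probably safer than @@IDENTITY, since @@IDENTITY can pick up an identity generated by a nested trigger:

    -- Proc parameters in real life; declared here just for the sketch
    DECLARE @OldProdID int, @NewEffDate smalldatetime, @NewProdID int

    -- Create the new version of the product
    INSERT INTO tblProducts (cProdCode, sProdName, dtEffDate)
    SELECT cProdCode, sProdName, @NewEffDate
    FROM tblProducts
    WHERE iProdID = @OldProdID

    -- Capture the UID of the row just inserted in this scope
    SET @NewProdID = SCOPE_IDENTITY()

    -- Duplicate the children, pointing them at the new version
    INSERT INTO tblProductRulesList (iProdID, iRuleID)
    SELECT @NewProdID, iRuleID
    FROM tblProductRulesList
    WHERE iProdID = @OldProdID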

Secondly, I think I'm confused about how you can cascade a change to a child table from the parent table. If my child table stores values related to a parent table, how does that value get updated through a cascade without me touching the child table directly? Does it follow the key relationships down to the children, then match the columns in my update to those in the child table, and update those columns (so it looks like an update to the parent table, except the columns exist in the children)? I really wish I had a more solid background in databases!

None of the data is really volatile, though. Once the product is created, the values would rarely change. If one did change, a new version would be created with the new values, which would also be relatively static.

I think I need to find a book on DB design best practices.

Thanks to both of you.. I'll get my head around this eventually

  13. #13
    Join Date
    Jul 2003
    Location
    SoCal
    Posts
    721
    Originally posted by rdjabarov
    Seppuku needs an explanation of behavior, not a lecture on superiority of bm's database design methodology!
    Well, I could very well be wrong in how I'm approaching this problem.. if the behavior is due to my own design failures, then I'll happily welcome anything that will make it better.

  14. #14
    Join Date
    Jun 2003
    Location
    Ohio
    Posts
    12,592
    Provided Answers: 1
    Seppuku,

    If you would like some help redesigning your database, send me a private message and I'd be happy to assist you.

    blindman
