  1. #1
    Join Date
    Jan 2004
    Posts
    28

    Unanswered: Denormalising 2 tables

    When denormalising, how do you choose which tables need this? I know how to denormalise but not actually what to look for.

    Thanks
    Arnielover

  2. #2
    Join Date
    Jan 2004
    Location
    Venezuela
    Posts
    151

    Re: Denormalising 2 tables

    Originally posted by Arnielover
    When denormalising, how do you choose which tables need this? I know how to denormalise but not actually what to look for.

    Thanks
    Arnielover
    There are several reasons to denormalise tables.

    1. Heavy joins (very common in data warehouses; see the sketch below).
    2. In general, when transactions against certain tables would perform better denormalised, and that gain is actually needed.

    Note: denormalising too much is bad for the structure of the tables and their relationships in an application.
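
    For example (just a sketch; the table and column names are invented), a data warehouse might copy a dimension attribute into the fact table to eliminate a heavy join:

        -- normalised: every report pays for the join
        SELECT c.region, SUM(o.amount)
        FROM   orders o
        JOIN   customers c ON c.customer_id = o.customer_id
        GROUP  BY c.region;

        -- denormalised: region is copied into orders, so reports skip the join
        ALTER TABLE orders ADD (region VARCHAR2(30));

        UPDATE orders o
        SET    region = ( SELECT c.region
                          FROM   customers c
                          WHERE  c.customer_id = o.customer_id );

        SELECT region, SUM(amount)
        FROM   orders
        GROUP  BY region;
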
    Joel Pérez

  3. #3
    Join Date
    Jan 2004
    Posts
    28
    Right, so what would you say is a performance-critical transaction?

  4. #4
    Join Date
    Jan 2004
    Location
    Venezuela
    Posts
    151
    Originally posted by Arnielover
    Right, so what would you say is a performance-critical transaction?
    Yes: denormalisation is applied to critical transactions that need the best possible performance.
    Joel Pérez

  5. #5
    Join Date
    Oct 2003
    Posts
    706

    Honestly now, Arnielover... how can anyone possibly answer that question, for this business, better than you?

    Look for pain. Look where it hurts. Look for situations where the business operations of the company are being negatively impacted and "buying more iron to throw at it" won't solve the problem. Look for: "we've simply got no choice here; we have to amputate."

    As you select candidates for de-normalization (or any other type of non-trivial change), it is imperative that you be able to list each and every application or report or query or "anything else" that would be impacted by the proposed change.
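
    In Oracle you can at least pull the in-database dependents out of the data dictionary; anything outside the database (application code, reports) still has to be traced by hand. A sketch, with an invented schema and table name:

        -- every view, procedure, trigger, etc. that references the table
        SELECT owner, name, type
        FROM   all_dependencies
        WHERE  referenced_owner = 'APP'          -- hypothetical schema
        AND    referenced_name  = 'ORDER_LINES'  -- hypothetical table
        AND    referenced_type  = 'TABLE';
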
    ChimneySweep(R): fast, automatic table repair at a click of the mouse! http://www.sundialservices.com

  6. #6
    Join Date
    Nov 2002
    Location
    Desk, slightly south of keyboard
    Posts
    697
    Hi Chaps,

    De-normalising is the evil of databases. Look for other ways to improve performance - there are invariably many.

    Extreme normalisation exists for data models; no normalisation exists for %$*!"£* programmers. Somewhere in the middle is a good compromise for our production databases.

    IMHO de-normalising just defers the problem another year or two, at which point the other options for performance improvements are heavily constrained by the de-normalised schema. Again I would urge you to consider other approaches.
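
    One alternative among many (a sketch with invented names): a materialized view gives the pre-joined, pre-aggregated result the denormalisers are after, while the base tables stay normalised.

        -- the join and aggregation are paid once, up front, not per query
        CREATE MATERIALIZED VIEW order_totals_mv AS
        SELECT c.region, SUM(o.amount) AS total_amount
        FROM   orders o
        JOIN   customers c ON c.customer_id = o.customer_id
        GROUP  BY c.region;
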

    Hth
    Bill
    Please don't email me directly with questions. I've probably just got home from the pub and cannot guarantee the sanity of my answers. In fact, I can't believe I actually made it home.

  7. #7
    Join Date
    Nov 2003
    Location
    down on the cube farm, left then right then another left
    Posts
    467
    RE: %$*!"£* programmers

    Hey, that's me up there!! :-) (inside joke)

    Except I realize there is normal and non-normal, and that both show an obsessive trait. Like the man said, normality with a few quirks is what we should strive for (paraphrased).
    NOTE: Please disregard the label "Senior Member".

  8. #8
    Join Date
    Sep 2002
    Location
    UK
    Posts
    5,171
    Provided Answers: 1
    Originally posted by billm
    IMHO de-normalising just defers the problem another year or two, at which point the other options for performance improvements are heavily constrained by the de-normalised schema. Again I would urge you to consider other approaches.
    So true, so true, so true.

    At a company I recently worked for, one of the so-called "senior designers" was proposing to denormalise by copying the customer name into every order_line* record, because the performance of queries was very bad. This denormalisation proposal had been approved by management and was about to go ahead.

    However, the queries in question were against views, in which the customer name was being obtained by calling a (poorly written/tuned) PL/SQL function rather than doing a simple join. Once the views were sorted out, there was no longer a performance issue to resolve!
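
    To illustrate (the real names are long gone, so these are invented):

        -- the slow view: a scalar PL/SQL function fired once per row
        CREATE OR REPLACE VIEW order_lines_v AS
        SELECT ol.order_id, ol.line_no, ol.amount,
               get_customer_name(ol.customer_id) AS customer_name
        FROM   order_lines ol;

        -- the fix: a plain join, and no denormalisation required
        CREATE OR REPLACE VIEW order_lines_v AS
        SELECT ol.order_id, ol.line_no, ol.amount,
               c.name AS customer_name
        FROM   order_lines ol
        JOIN   customers c ON c.customer_id = ol.customer_id;
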

    * It wasn't really order_line, but it was something analogous to that.

  9. #9
    Join Date
    Sep 2003
    Posts
    156
    I have subsequently lost the meaning of the word 'normal'

    Look for pain. Look where it hurts. Look for situations where the business operations of the company are being negatively impacted and "buying more iron to throw at it" won't solve the problem. Look for: "we've simply got no choice here; we have to amputate."
    Friday afternoon, 16:00 - the clock is ticking... tick... tock...

    Amputate my head... please, someone
    rgs,

    Ghostman

  10. #10
    Join Date
    Apr 2002
    Location
    Toronto, Canada
    Posts
    20,002
    tony, that customer name example actually makes sense, as long as there's no view or wacky function involved

    a better example is customer address placed into the order

    the customer may later change her address but you want to keep track of the address that the order was shipped to
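
    something like this (a sketch, i'm inventing the column names):

        -- the ship-to address is a fact about the order, not the customer,
        -- so storing it on the order row isn't really denormalization at all
        CREATE TABLE orders
        ( order_id       INTEGER      NOT NULL PRIMARY KEY
        , customer_id    INTEGER      NOT NULL REFERENCES customers
        , order_date     DATE         NOT NULL
        , ship_to_street VARCHAR2(60) NOT NULL  -- copied from the customer
        , ship_to_city   VARCHAR2(40) NOT NULL  --   at the time of the order
        , ship_to_postal VARCHAR2(10) NOT NULL
        );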

    one of the worst things you can do is use denormalization to solve a performance "problem" without understanding the logical model

    certus, where are you

    i agree with billm, and often it does not even take "another year or two"
    rudy.ca | @rudydotca
    Buy my SitePoint book: Simply SQL

  11. #11
    Join Date
    Sep 2002
    Location
    UK
    Posts
    5,171
    Provided Answers: 1
    tony, that customer name example actually makes sense, as long as there's no view or wacky function involved
    Yes, it could make sense if that was the rationale for doing it. It wasn't the rationale in this case; the thinking was precisely: "this query runs slow - let's eliminate a join by denormalisation".

    Another brainwave someone had there was: "The process of getting an order into the system takes several minutes longer than it should; let's remove all the check constraints from the tables, since the input forms have validation already." Right, shaving those milliseconds off the insert statements should really speed that process up! (Of course, the real performance problems, as ever, were in the time taken to retrieve data, e.g. in "lists of values".)
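
    For the record, this is the sort of thing that was on the chopping block (names invented, of course):

        -- the kind of constraint that was to be dropped
        ALTER TABLE order_lines
          ADD CONSTRAINT ol_qty_chk CHECK (quantity > 0);

        -- dropping it saves microseconds per insert...
        ALTER TABLE order_lines DROP CONSTRAINT ol_qty_chk;
        -- ...and leaves the table open to bad data from anything that
        -- bypasses the forms: batch loads, ad hoc SQL, application bugs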

    Luckily for the world (and my sanity) the project got cancelled before it went into production.

    certus, where are you
    My guess is Certus doesn't hang out in the Oracle forum.
