  1. #1
    Join Date
    Oct 2009

    real client's project..need help optimizing design please!!

    Right, I posted earlier with a practice table I was working on. I appreciated the reply and have been working on that, but yesterday I got a real client's project.

    I have been given the task of optimizing the database design.
    The database keeps track of the transactions the company makes. A transaction is made and records are stored in several different tables.

    These are the tables:

    Opening Balance
    Transaction Type

    The client would like

    1) the data to be stored in ONE location and not 2 different places

    2) to save disk space

    3) prevent duplicate entries from being created

    Any ideas? I won't post the current design of the database unless necessary, as I would like to try to do it all by myself, but some pointers would be appreciated.

    Initially I was thinking of denormalization: eradicating the multiple tables and placing all the data in one table. But this might not be the best strategy. How would you do it? What practices would you use?

    ANY help would be appreciated.


  2. #2
    Join Date
    Jun 2003
    Quote Originally Posted by badmanmc
    1) the data to be stored in ONE location and not 2 different places
    Is it not already in one, single, database?

    Quote Originally Posted by badmanmc
    2) to save disk space
    Is saving disk space your primary business goal? If so, find a job with another company, because disk space is cheap and being chintzy with it inevitably leads to disaster.

    Quote Originally Posted by badmanmc
    3) prevent duplicate entries from being created
    This is what database constraints and normalization are for. Yet you seem to be leaning towards denormalization.
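    To make the point concrete, here is a minimal sketch of a uniqueness constraint doing the duplicate-prevention work for you. It uses SQLite through Python's sqlite3 module, and the table and column names are illustrative, not taken from your actual design:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE transaction_type (
            type_id   INTEGER PRIMARY KEY,
            type_name TEXT NOT NULL UNIQUE  -- the constraint rejects duplicates
        )
    """)
    conn.execute("INSERT INTO transaction_type (type_name) VALUES ('DEPOSIT')")

    # The second identical insert is refused by the database itself,
    # with no application-side duplicate checking required.
    try:
        conn.execute("INSERT INTO transaction_type (type_name) VALUES ('DEPOSIT')")
    except sqlite3.IntegrityError as e:
        print("duplicate rejected:", e)
    ```

    Any client DBMS worth using has the same mechanism (UNIQUE constraints, primary keys); the syntax above is just SQLite's flavor of it.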

    The only tables that I can see in your design which are candidates for consolidation are USER and OPENINGBALANCE, and then only if each user has a single account. Really, you should have another table USERACCOUNT, which stores opening balances.
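    A sketch of what that USERACCOUNT shape might look like, again in SQLite via Python; the column names here are my guesses, not your schema:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE user (
            user_id   INTEGER PRIMARY KEY,
            user_name TEXT NOT NULL
        );
        -- One row per account: a user may own several accounts,
        -- and each account carries its own opening balance.
        CREATE TABLE useraccount (
            account_id      INTEGER PRIMARY KEY,
            user_id         INTEGER NOT NULL REFERENCES user(user_id),
            opening_balance NUMERIC NOT NULL DEFAULT 0
        );
    """)
    conn.execute("INSERT INTO user (user_id, user_name) VALUES (1, 'badmanmc')")
    conn.executemany(
        "INSERT INTO useraccount (user_id, opening_balance) VALUES (?, ?)",
        [(1, 100.00), (1, 250.00)],  # one user, two accounts
    )
    for row in conn.execute(
        "SELECT account_id, opening_balance FROM useraccount WHERE user_id = 1"
    ):
        print(row)
    ```

    If each user can only ever have one account, the opening balance can live on USER directly and the extra table is unnecessary; the design choice hinges on that cardinality.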
    If it's not practically useful, then it's practically useless.

    blindman "sqlblindman"

  3. #3
    Join Date
    Oct 2002
    Baghdad, Iraq
    I have been given the task of optimizing the database design.
    I think the problem people have when they ask questions like this is that there's a perception that most designs are significantly non-optimal.

    Certainly, it's possible to make a non-optimal design. You can simply duplicate an attribute, or have completely duplicate rows. But people have an intrinsic ability to simplify information; it comes naturally from your ability to communicate. (Even when people are redundant when they express themselves, there's a reason. For example, a politician filibusters to prevent someone else from speaking or to avoid answering a question.)

    But if you have an actual, working design, odds are you've already cut out everything you don't need. Getting to an actual, working design is, of course, a significant undertaking in itself, as is determining what you actually need. That undertaking, especially if you have any kinds of deadlines, naturally weeds out all the obviously unnecessary stuff. So once you're there, you've probably removed as much complexity as you can without sacrificing completeness. That is, you could make it simpler, but only by making it do less.

    A reasonably strict application of the rules of normalization or of denormalization does not add or remove complexity in the system as a whole. What those processes do is shift complexity between the schema and the data / application. As with most things in engineering, there's a trade-off. And as with most trade-offs, if you don't fully understand what's involved, you'll be inclined to think one direction is almost certainly better than the other.

    For instance, if you haven't written any application logic and you're only looking at the schema, you'd be inclined to believe that you're making the system much simpler by eliminating tables. What you're not seeing, though, is the number of lines of code you have to write to handle stuff that could have been managed by the DBMS. For instance, if you had a simple foreign key constraint (one line of SQL code) and you eliminated it to denormalize, you could easily end up adding dozens of lines of application logic to do the same.
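    That trade can be seen in a few lines. The sketch below uses SQLite from Python with made-up table names; the REFERENCES clause is the one line of SQL that replaces all the application-side "does this type exist?" checking:

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
    conn.executescript("""
        CREATE TABLE transaction_type (
            type_id INTEGER PRIMARY KEY,
            name    TEXT NOT NULL
        );
        CREATE TABLE txn (
            txn_id  INTEGER PRIMARY KEY,
            -- one line of declarative SQL; without it, every code path that
            -- inserts a txn would have to validate type_id itself
            type_id INTEGER NOT NULL REFERENCES transaction_type(type_id)
        );
    """)
    try:
        conn.execute("INSERT INTO txn (type_id) VALUES (999)")  # no such type
    except sqlite3.IntegrityError:
        print("DBMS rejected the orphan row; no application code needed")
    ```

    Drop the constraint to denormalize and that rejection logic has to be rewritten, correctly and consistently, in every application that touches the table.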

    (It's not clear from your post whether or not there's a working system already or you're working on something that's still on the drawing board. If you don't have a working system, focus on completeness and correctness. "Premature optimization is the root of all evil.")

    So I'm assuming that you or your predecessor is a sane individual who didn't build in any pointless redundancy, and through the normal process of design made all the obvious optimizations. Once the obvious optimizations are made, the job gets much, much harder. If that is true, you're probably at the point where further changes yield little benefit and could easily make the system perform worse, what economists call "diminishing marginal returns."

    What you need to do, at this point, is profile the system to find bottlenecks. Trace all your SQL to see which queries take the most time. I'm often surprised to find that some obscure query is killing performance. Then, systematically, work out the problem, fix it, and test to make sure you didn't break anything. And it makes sense that as you go on, optimizing becomes a more expensive and complex undertaking. The analogy is that one person can pick the low-hanging fruit by hand; once that's done, you need either a team of people with ladders or a big, expensive cherry picker.
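    The crudest useful version of that profiling loop is just timing each query. A sketch in Python against an in-memory SQLite database (real client tooling would use the DBMS's own trace/EXPLAIN facilities, and the table here is invented):

    ```python
    import sqlite3
    import time

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE txn (txn_id INTEGER PRIMARY KEY, amount NUMERIC)")
    conn.executemany("INSERT INTO txn (amount) VALUES (?)",
                     [(i,) for i in range(10000)])

    def timed(sql):
        # Run one query and report how long it took.
        start = time.perf_counter()
        rows = conn.execute(sql).fetchall()
        print(f"{time.perf_counter() - start:.6f}s  {sql}")
        return rows

    # Compare a full-table scan on an unindexed column
    # with a primary-key lookup the DBMS can satisfy directly.
    timed("SELECT * FROM txn WHERE amount = 9999")
    timed("SELECT * FROM txn WHERE txn_id = 9999")
    ```

    Sorting the timings from a trace like this is what points you at the obscure query that's actually killing performance, rather than the one you suspected.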
