  1. #1
    Join Date
    Jan 2009

    Application design, looking for advice

    Hello, I've just joined this forum and I am looking for some advice on an application design.

    I am in the process of developing a database that stores pass and failure rates for building a product, and I need advice on the best way to handle the data.

    Just to give you all some background on this, I am using Oracle 10g as my RDBMS and ColdFusion as the application development language.

    OK, now that I have that part defined, I will go into detail about what I have and what I am trying to achieve.

    The application is a data collection application that shop-floor workers will use to record each line's First Time Build Rate.

    Each line has some number of processes, for example:

    Flow Valve 1
    Flow Valve 2
    Fit Cover

    Each process is then broken down into Task Groups:

    Task Group 1
    Task Group 2

    Each Task Group has a set of Tasks:

    Fit flow valve into unit.
    Seal with sealant
    Fit rubber seal
    Fit enclosure

    It's at this level that the operators record the number of failures against each task.

    So the following would be recorded:
    Task Group 1
    Fit flow valve into unit. - Failed = 10
    Seal with sealant - Failed = 2
    Fit rubber seal - Failed = 36
    Fit enclosure - Failed = 3

    Task Group 2
    Fit flow valve into unit. - Failed = 15
    Seal with sealant - Failed = 200
    Fit rubber seal - Failed = 53
    Fit enclosure - Failed = 9

    Also, the passed qty would be recorded against the Task Group on the same input form as the failures are being recorded:

    So for example this would happen:
    Task Group 1
    Fit flow valve into unit. - Failed = 10
    Seal with sealant - Failed = 2
    Fit rubber seal - Failed = 36
    Fit enclosure - Failed = 3

    Passed: 200

    Task Group 2
    Fit flow valve into unit. - Failed = 15
    Seal with sealant - Failed = 200
    Fit rubber seal - Failed = 53
    Fit enclosure - Failed = 9

    Passed: 623

    The way I have the DB set up right now, all of the failures are stored in a Failures table and all of the pass data is stored in a Passed table.

    I then need to sum up all of the failures by Task Group, sum up all of the passed qtys by Task Group, and multiply the summed failures by the summed passed qtys. I do this by joining the two tables on the Task Group UUID, which is stored in both the Passed table and the Failures table.
    I need to produce a complete year's results from this by Task Group, so a date range is applied against the input date, which is stored in the Failures table.
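
    Roughly, the query I am running today looks something like the sketch below (the table and column names here are simplified placeholders rather than my exact schema):

        -- sum failures and passes separately per Task Group, then join the two sets of totals
        SELECT f.task_group_uuid,
               f.total_failed,
               p.total_passed,
               f.total_failed * p.total_passed AS combined_figure
          FROM (SELECT task_group_uuid, SUM(failed_qty) AS total_failed
                  FROM failures
                 WHERE input_date BETWEEN DATE '2008-01-01' AND DATE '2008-12-31'
                 GROUP BY task_group_uuid) f
          JOIN (SELECT task_group_uuid, SUM(passed_qty) AS total_passed
                  FROM passed
                 GROUP BY task_group_uuid) p
            ON p.task_group_uuid = f.task_group_uuid;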

    The output from this query is then analysed by graphing.

    My question is: is this the best way to handle this, or am I completely off the track here? I am not asking anyone to come up with a complete solution; I just need some guidance or advice.

    Apologies if this is long-winded, but I needed to explain what the current design looks like right now.

    Many thanks


  2. #2
    Join Date
    Jun 2004
    Arizona, USA
    Maybe, since you appear to have a lot of common descriptions between tasks, taskgroups, and processes...
    Lines: LineID*, LineDescr, any other line specific field(s)
    ProcessesInLines: LineID*(fk), ProcessID*(fk), ProcessOrder
    Processes: ProcessID*, ProcessDescr, other process specific field(s)
    TaskGroupsInProcesses: ProcessID*(fk), TaskGroupID*(fk), TaskGroupOrder
    TaskGroups: TaskGroupID*, TaskGroupDescr, other task group specific field(s)
    TasksInTaskGroups: TaskGroupID*(fk), TaskID*(fk), TaskOrder
    Tasks: TaskID*, TaskDescr, other task specific field(s)
    Results: LineID*(fk), ProcessID*(fk), TaskGroupID*(fk), TaskID*(fk), PassFail, PassFailQty
    example results data:
    2,4,2,3,'F',10 (10 units failed Line 2, Process 4, Task Group 2, Task 3)
    2,4,2,3,'P',321 (321 units passed Line 2, Process 4, Task Group 2, Task 3)
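
    In Oracle DDL terms, a rough sketch of a few of these tables might look like the following (the data types are just guesses; adjust to suit, and create the parent tables first, in dependency order):

        -- lookup table of tasks
        CREATE TABLE Tasks (
            TaskID    NUMBER PRIMARY KEY,
            TaskDescr VARCHAR2(200)
        );

        -- intersection table: which tasks belong to which task group, and in what order
        CREATE TABLE TasksInTaskGroups (
            TaskGroupID NUMBER NOT NULL REFERENCES TaskGroups (TaskGroupID),
            TaskID      NUMBER NOT NULL REFERENCES Tasks (TaskID),
            TaskOrder   NUMBER,
            PRIMARY KEY (TaskGroupID, TaskID)
        );

        -- one row per pass or fail quantity recorded at the task level
        CREATE TABLE Results (
            LineID      NUMBER  NOT NULL REFERENCES Lines (LineID),
            ProcessID   NUMBER  NOT NULL REFERENCES Processes (ProcessID),
            TaskGroupID NUMBER  NOT NULL REFERENCES TaskGroups (TaskGroupID),
            TaskID      NUMBER  NOT NULL REFERENCES Tasks (TaskID),
            PassFail    CHAR(1) CHECK (PassFail IN ('P', 'F')),
            PassFailQty NUMBER  NOT NULL
        );

    (Lines, Processes, TaskGroups, and the other intersection tables would be built the same way.)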

    If you add a surrogate ID field to each of the intersection tables (XinY), this table could become:
    Results: ProcessesInLineID*(fk), TaskGroupInProcessID*(fk), TaskInTaskGroupID*(fk), PassFail, PassFailQty
    The approaches above would force you to have a record, with a status and a qty, for each task, but you can more easily use aggregate functions against the single table. And you can easily sum pass and/or fail records at any level you need.
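
    For instance, summing passes and failures per task group then becomes a single pass over Results (off the top of my head, not tested):

        SELECT TaskGroupID,
               SUM(CASE WHEN PassFail = 'F' THEN PassFailQty ELSE 0 END) AS total_failed,
               SUM(CASE WHEN PassFail = 'P' THEN PassFailQty ELSE 0 END) AS total_passed
          FROM Results
         GROUP BY TaskGroupID;

    Swap TaskGroupID for LineID, ProcessID, or TaskID (or group by several of them) to roll the totals up at whatever level you need.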

    Since the records in all the tables except the results table will change rarely, additional surrogate keys and indexes should help, not hurt performance.

    * indicates primary key field, (fk) indicates foreign key relationship
    Last edited by loquin; 01-14-09 at 13:19.
    "Lisa, in this house, we obey the laws of thermodynamics!" - Homer Simpson
    "I have my standards. They may be low, but I have them!" - Bette Middler
    "It's a book about a Spanish guy named Manual. You should read it." - Dilbert

  3. #3
    Join Date
    Jan 2009
    Hello loquin,

    First of all, I want to thank you for taking the time to look at this.

    I am currently reading this from home, so I am going to review your approach tomorrow when I am at work to see how it fits into the model.

    What you said about summing against one table would make things easier.
    Just thinking out loud now on the scenario:

    I need to be able to sum up all of the failed qtys against a Task Group (grouped by ProcessID), sum up all of the passed qtys against a Task Group, do a math calculation on both totals, and then output the results by Task Group across a date range.

    So the final output would be something like this (I need to double-check the math tomorrow):

    avg(sumOfFailedQtys * sumOfPassedQtys)
    by Taskgroup over a given date range and sort order.
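
    Against your Results layout I imagine that would be roughly the query below, assuming an InputDate column gets added to Results (the bind variables are just placeholders, and the multiplication is the bit I still need to double-check):

        SELECT ProcessID,
               TaskGroupID,
               SUM(CASE WHEN PassFail = 'F' THEN PassFailQty ELSE 0 END) AS sum_failed,
               SUM(CASE WHEN PassFail = 'P' THEN PassFailQty ELSE 0 END) AS sum_passed,
               SUM(CASE WHEN PassFail = 'F' THEN PassFailQty ELSE 0 END)
                 * SUM(CASE WHEN PassFail = 'P' THEN PassFailQty ELSE 0 END) AS combined
          FROM Results
         WHERE InputDate BETWEEN :start_date AND :end_date
         GROUP BY ProcessID, TaskGroupID
         ORDER BY ProcessID, TaskGroupID;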

    This data is then used to dynamically create graphs which have drill-down capabilities.

    Just to put something else into the mix: on one line there are 3 test stations (task groups), each with, say, 10 tasks. The results of each of the 10 tasks x 3 task groups are summed up, averaged out, and then added into the final results. I have no idea how to approach that.
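
    If I had to guess at it, maybe something like the sketch below would work, first totalling each test station's (task group's) tasks and then averaging those totals per line, but I'm only thinking out loud and the column names are just borrowed from your layout:

        SELECT LineID,
               AVG(station_failed) AS avg_failed_per_station,
               AVG(station_passed) AS avg_passed_per_station
          FROM (SELECT LineID, TaskGroupID,
                       SUM(CASE WHEN PassFail = 'F' THEN PassFailQty ELSE 0 END) AS station_failed,
                       SUM(CASE WHEN PassFail = 'P' THEN PassFailQty ELSE 0 END) AS station_passed
                  FROM Results
                 GROUP BY LineID, TaskGroupID)
         GROUP BY LineID;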

    Anyway, I look forward to tomorrow to see how your approach fits into the model.

    Again many thanks

    Last edited by jedimatt; 01-16-09 at 06:47.
