  1. #1

    Transaction Table design

    I have a fairly simple data model: one main transaction table plus a few incidental info tables. My main question is, since the transaction table will eventually grow to thousands of rows, how will that affect performance, and how do I counter it? The select/insert ratio on this table would be about 60/40.

    I know I can index the primary key to improve select performance, but from what I have read, indexes can drastically decrease insert/delete performance.

    Is there any other way to design this table? Are there some basic rules to design a transaction table?

    I am using PostgreSQL on Linux.
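
    Roughly, the model looks like this (a simplified sketch, not my actual DDL; table and column names are just placeholders):

        -- One main transaction table plus an incidental info table.
        CREATE TABLE customers (
            customer_id serial PRIMARY KEY,
            name        text NOT NULL
        );

        CREATE TABLE transactions (
            transaction_id serial PRIMARY KEY,
            customer_id    integer NOT NULL REFERENCES customers,
            amount         numeric(12,2) NOT NULL,
            created_at     timestamp NOT NULL DEFAULT now()
        );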


  2. #2

    Re: Transaction Table design

    Thousands of rows is peanuts, millions of rows is OK, hundreds of millions of rows is big. You will certainly want the primary key indexed (I can't speak for PostgreSQL, but in Oracle it is unavoidably indexed) - otherwise every SELECT for a single row will be VERY slow. You probably need some other indexes too, on foreign key columns for example. There is a small cost per index on inserts/deletes, but it is FAR outweighed by the cost of selects where a required index is missing!
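
    For example, assuming a transactions table with a customer_id foreign key column (names made up for illustration), in PostgreSQL that would look like:

        -- PostgreSQL backs a PRIMARY KEY constraint with a unique index
        -- automatically; foreign key columns get no index unless you add one.
        CREATE INDEX transactions_customer_id_idx ON transactions (customer_id);

        -- EXPLAIN shows whether a given query actually uses an index.
        EXPLAIN SELECT * FROM transactions WHERE customer_id = 42;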

    To say more, one would need to see your proposed tables, indexes and volumetrics.
