I have a fairly simple data model: one main transaction table plus a few incidental info tables. My main question is, since the transaction table will eventually have thousands of rows, how will that affect performance, and how do I counter it? The select/insert ratio on this table would be about 60/40.
I know I can index the primary key to improve select performance, but from what I have read that drastically decreases insert/delete performance.
Is there any other way to design this table? Are there some basic rules to design a transaction table?
Thousands of rows is peanuts; millions of rows is OK; hundreds of millions of rows is big. You will certainly want the primary key indexed (I can't speak for PostgreSQL, but in Oracle it is unavoidably indexed) - otherwise every SELECT of a single row will be VERY slow. You probably need some other indexes too, e.g. on foreign key columns. There is a small cost on inserts/deletes per index, but it is FAR outweighed by the cost of SELECTs where a required index is missing!
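To make that concrete, here is a minimal sketch using SQLite from Python (the idea carries over to PostgreSQL and Oracle; the table and column names are hypothetical, not from your actual schema). The primary key is indexed automatically, but the foreign key column is not - you add that index yourself, and the query plan confirms lookups use it instead of scanning the whole table:

```python
import sqlite3

# Hypothetical schema: a transaction table referencing an accounts table.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("""
    CREATE TABLE transactions (
        id INTEGER PRIMARY KEY,          -- indexed automatically
        account_id INTEGER NOT NULL REFERENCES accounts(id),
        amount REAL NOT NULL,
        created_at TEXT NOT NULL
    )
""")
# The foreign key column gets NO index automatically; add one so that
# "all transactions for account X" avoids a full table scan.
cur.execute("CREATE INDEX idx_tx_account ON transactions(account_id)")

cur.executemany(
    "INSERT INTO accounts (id, name) VALUES (?, ?)",
    [(i, f"acct{i}") for i in range(100)],
)
cur.executemany(
    "INSERT INTO transactions (account_id, amount, created_at)"
    " VALUES (?, ?, ?)",
    [(i % 100, float(i), "2024-01-01") for i in range(10_000)],
)
conn.commit()

# The query plan's detail string names the index rather than a scan.
plan = cur.execute(
    "EXPLAIN QUERY PLAN"
    " SELECT * FROM transactions WHERE account_id = ?", (42,)
).fetchone()
print(plan[-1])
```

The per-insert cost of maintaining `idx_tx_account` is one extra B-tree update per row; the cost of *not* having it is a scan of every transaction row on every account lookup, which is the trade-off described above.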
To say more, one would need to see your proposed tables, indexes, and expected data volumes.