We've run into what seems like a simple issue and may have backed ourselves into a corner, so I'm seeking suggestions on how to adapt.
BACKGROUND: There are multiple remote sites, each collecting data into its own PostgreSQL table on site. To make that data viewable in our application and to ensure reliable delivery (the VPNs are not always up), I'm using Londiste3 to replicate each site's table to the master database. Replication is one-way.
PROBLEM: After implementing Londiste, we changed one of our main tables into a VIEW to avoid re-coding the application. As a result, the application obviously cannot use the indexes it "knows" about, because the indexes on each individual replicated table were renamed.
Summary of Architecture

SITE 1 ----->---\
SITE 2 ------>---->  MASTER
SITE 3 ------>-----/
Is there a reason that you don't want to push the individual records into a single table, with an additional Location column to specify where the data came from?
Adding another table every time you add a location is both inefficient and contrary to data normalization guidelines. If there's a reason to partition your data into separate tables, do so in a fashion that matches your data usage. For instance, if most queries hit recent data far more often than older records, partition the data into separate tables by date.
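A minimal sketch of that suggestion: one shared table with a site column instead of one table per location, so a single set of indexes serves every site. Table and column names here are illustrative, and SQLite is used only so the snippet is self-contained; the same schema and index work in PostgreSQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# One table for all sites; the "site" column records where each row came from.
conn.execute("""
    CREATE TABLE readings (
        site        TEXT NOT NULL,   -- originating location
        recorded_at TEXT NOT NULL,   -- ISO 8601 timestamp
        value       REAL NOT NULL
    )
""")
# A single index covers every site, so the application never has to
# know per-site index names.
conn.execute("CREATE INDEX readings_site_time ON readings (site, recorded_at)")

rows = [
    ("site1", "2024-01-01T00:00:00", 1.5),
    ("site2", "2024-01-01T00:05:00", 2.5),
    ("site3", "2024-01-01T00:10:00", 3.5),
]
conn.executemany("INSERT INTO readings VALUES (?, ?, ?)", rows)

# One query spans all sites; restricting to one site is just a WHERE clause.
total = conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
site2_value = conn.execute(
    "SELECT value FROM readings WHERE site = ?", ("site2",)
).fetchone()[0]
print(total, site2_value)  # 3 2.5
```

With this layout, adding a fourth site is just new rows (and a new Londiste subscription), not a new table plus a rebuilt VIEW.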