I'm working on quite a complex project and would appreciate any feedback or guidance on achieving the most efficient database setup.
I'm developing a hosted visitor tracking and analysis service for affiliate marketers. The service will allow them to view and analyse their visitor metrics and behaviour in great detail, track their marketing campaigns, and analyse overall campaign performance.
Each user will require access to 10-20 tables unique to their own user ID. Within these tables, I'll be storing the information relevant to their account. This information will be fairly extensive, and I'd expect some tables to contain over 100,000 rows of visitor data.
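For illustration, one of those per-client tables might look something like this. This is just a minimal sketch using SQLite; the table and column names are hypothetical, only there to make the question concrete:

```python
import sqlite3

# Hypothetical example of one of the 10-20 per-client tables:
# a table recording individual visitor clicks.
conn = sqlite3.connect("affiliate_tracker.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS clicks (
        id          INTEGER PRIMARY KEY,
        visitor_ip  TEXT    NOT NULL,
        campaign_id INTEGER NOT NULL,
        landed_at   TEXT    NOT NULL,  -- visit timestamp, ISO 8601
        referrer    TEXT
    )
""")
conn.commit()
conn.close()
```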
I'm unsure of the best way to store the data for efficiency. The two scenarios I have envisaged are:
1 - Create a new database for each client that contains the 10-20 tables needed to store and analyse their data.
2 - Add 10-20 new tables per client to a single 'main' database, each table name suffixed with the client's unique user ID (users_123456, clicks_123456, campaigns_123456, etc.)
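To make the difference concrete, here's a minimal sketch of how a query for a given client would be routed under each scenario. Again this uses SQLite and hypothetical names, and assumes the databases and tables already exist:

```python
import sqlite3

# Scenario 1: one database per client; table names stay fixed.
def count_clicks_scenario_1(client_id: int) -> int:
    conn = sqlite3.connect(f"client_{client_id}.db")
    (count,) = conn.execute("SELECT COUNT(*) FROM clicks").fetchone()
    conn.close()
    return count

# Scenario 2: one shared 'main' database; each client's tables carry
# the client's user ID as a suffix. The suffix is interpolated into
# the SQL as an identifier, so it must come from a trusted integer,
# never raw user input.
def count_clicks_scenario_2(client_id: int) -> int:
    conn = sqlite3.connect("main.db")
    query = f"SELECT COUNT(*) FROM clicks_{int(client_id)}"
    (count,) = conn.execute(query).fetchone()
    conn.close()
    return count
```

Either way, every query has to be routed by client before it runs, whether that routing picks a database file or builds a table name.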
Let's say, for the sake of argument, I have 1,000 clients using the service.
This would mean, in scenario 1:
1,000 databases, each containing 20 tables (20,000 tables in total)
100,000-200,000 rows of data per database (100-200 million rows in total)
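Just as a quick sanity check on that scale:

```python
clients = 1_000
tables_per_client = 20
rows_per_database = (100_000, 200_000)  # low and high estimates

print(clients * tables_per_client)                     # 20000 tables in total
print([clients * rows for rows in rows_per_database])  # [100000000, 200000000] rows in total
```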
I'd greatly appreciate any feedback or thoughts on best practice for this situation. Which scenario would be the most efficient in terms of running queries and allowing users to view their data as quickly as possible?