Hey guys,

I've been racking my brain over this, so I decided to get some outside opinions and found this forum.

I'm a programmer used to dealing with MySQL databases of at most a few hundred thousand rows, but I'm now working on my own start-up product and really want to make the right decision here.

Let me start off by mentioning that speed and data privacy are the two most important requirements.

I have two tables, both with the same number of rows (millions, possibly tens of millions).

Table 1
id (int) - ownerid (int) - sourceid (int) - encrypted JSON object with 25 fields, A to Y (those fields hold user data like email, IP, address, etc.)

Use case (low - moderate usage):
The user is logged in; all records he owns are selected from the table (by ownerid), decrypted using his auth details, and displayed in an (AJAX) table where rows can be edited, deleted, and added. On save, the plain data is both encrypted and hashed: the encrypted data goes to Table 1 and the hashes go to Table 2.
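The save step could be sketched like this. The pepper, field names, and the choice of HMAC-SHA256 are my assumptions, not anything from your spec; the point is that the per-field hashes must be deterministic (same plaintext, same hash) or the Table 2 lookups won't work, and they should be keyed, since low-entropy fields like IPs are trivially brute-forceable with a plain hash:

```python
import hmac
import hashlib
import json

# Hypothetical server-side secret ("pepper"). Without a keyed hash,
# anyone with Table 2 could brute-force low-entropy fields like IPs.
PEPPER = b"server-side-secret"

def hash_field(value: str) -> str:
    # Deterministic keyed hash: equal plaintexts map to equal hashes,
    # which is exactly what the Table 2 equality lookups rely on.
    return hmac.new(PEPPER, value.encode("utf-8"), hashlib.sha256).hexdigest()

def prepare_save(record: dict) -> tuple:
    # record maps field names (A..Y) to plaintext values.
    # Table 1 would store the JSON *encrypted* with a key derived from the
    # user's auth details (e.g. AES-GCM via a crypto library -- not shown).
    # Table 2 gets one keyed hash per field.
    plaintext_json = json.dumps(record)
    table2_row = {field: hash_field(val) for field, val in record.items()}
    return plaintext_json, table2_row
```

Note that the encryption and the hashing are independent: the hash is computed from the plaintext, not from the ciphertext, so Table 2 stays queryable no matter which user's key encrypted the Table 1 blob.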

Table 2
table1id (int) - A - B - C - .... - Y

table1id is a FK to Table 1's 'id', and A - Y are the same fields as in the Table 1 JSON object, except the values are hashes (computed from the unencrypted data).

Use case (high usage):
A user makes a query (RESTful API) and provides values for, say, fields B, C, E, K, L.
I hash those values, and then I need to do two things: first, check whether the *combination* of those hashed fields matches an existing row in Table 2, and second, check the existence of each provided field value separately in Table 2.
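Those two checks boil down to one AND-combined equality query plus one EXISTS query per field. Here's a minimal sketch using SQLite as a stand-in (the schema, column subset, and hash values are placeholders); in MySQL, Sphinx, or DynamoDB the shape of the two lookups would be the same:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table2 (table1id INTEGER, B TEXT, C TEXT, E TEXT, K TEXT, L TEXT)")
conn.execute("INSERT INTO table2 VALUES (1, 'hB', 'hC', 'hE', 'hK', 'hL')")

# Hashed values supplied by the API caller (placeholder hashes).
query = {"B": "hB", "C": "hC", "E": "xx"}

# (1) Combination check: is there one row matching ALL provided hashes?
# Field names are interpolated only because they come from a fixed
# whitelist (A..Y) -- the values themselves are bound parameters.
where = " AND ".join(f"{f} = ?" for f in query)
combo = conn.execute(f"SELECT COUNT(*) FROM table2 WHERE {where}",
                     list(query.values())).fetchone()[0] > 0

# (2) Per-field existence: check each hash independently.
exists = {f: conn.execute(f"SELECT EXISTS(SELECT 1 FROM table2 WHERE {f} = ?)",
                          (h,)).fetchone()[0] == 1
          for f, h in query.items()}

print(combo)   # False: E's hash doesn't match the stored row
print(exists)  # {'B': True, 'C': True, 'E': False}
```

One practical consequence for the "high usage" case: check (2) is 5 independent single-column equality lookups per request, so whatever store you pick needs an index on every one of the 25 hash columns, which is worth weighing when you compare MySQL secondary indexes against a Sphinx RT index or DynamoDB GSIs.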

My first idea was to keep Table 1 in plain MySQL (hard to scale) and implement Table 2 as an RT index in SphinxQL (easy to scale).

However, I'm wondering whether MongoDB, or perhaps Amazon DynamoDB plus CloudSearch, would be a better fit.

So my real question: given the use cases above, which database model would be best for each of the two tables?
Relational, document store, search engine, wide-column store?

I'd really like to get this right before the data starts growing, so I'd highly appreciate your input.

Thank you!