I'm creating a web application that receives user input. Because data is being submitted by a user, I have to be careful that they aren't passing me bad data.
For example, if I have users, each of which has a bookcollection, I have to make sure that a user can't pass me a bookcollection_id that isn't theirs. This involves a lot of long joins and selects just to find out who the owner of a record is.
If a user is modifying a custom book that they already have, the form sends me custombook_id and booktitle.
I have to jump from the custombooks_collections table to the bookCollections_users table in order to find out whether the user has the right to modify that custombook_id.
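As a concrete sketch of that ownership check, here's what the two-table hop might look like using sqlite3. The table names come from the question; the column names and the schema itself are assumptions for illustration.

```python
import sqlite3

# Hypothetical schema; column names are assumed for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE bookCollections_users (
    bookcollection_id INTEGER,
    user_id           INTEGER
);
CREATE TABLE custombooks_collections (
    custombook_id     INTEGER,
    bookcollection_id INTEGER
);
INSERT INTO bookCollections_users VALUES (10, 1);
INSERT INTO custombooks_collections VALUES (100, 10);
""")

def user_owns_custombook(conn, user_id, custombook_id):
    """Walk custombooks_collections -> bookCollections_users to check
    whether custombook_id belongs to one of user_id's collections."""
    row = conn.execute("""
        SELECT 1
        FROM custombooks_collections cc
        JOIN bookCollections_users bu
          ON bu.bookcollection_id = cc.bookcollection_id
        WHERE cc.custombook_id = ? AND bu.user_id = ?
    """, (custombook_id, user_id)).fetchone()
    return row is not None

print(user_owns_custombook(conn, 1, 100))  # the owner -> True
print(user_owns_custombook(conn, 2, 100))  # someone else -> False
```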
This isn't so bad here, but I have cases where the chain is closer to 4 or 5 tables away from the users table. Would it be OK to violate second normal form here and include the owner's user_id in a table further "down" the chain?
Or should I not worry about having to traverse 4 or 5 tables for each input to find out if the user can modify that part?
I would suggest that you make only one round-trip to the database, but use as many tables as you need in order to get the right information. The performance difference between joining one table and a dozen of them is trivial in every database engine that I've used.
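To illustrate the one-round-trip approach: even when the record sits four tables away from users, a single query with chained joins answers the ownership question. The two intermediate tables (chapters, pages) and all column names below are hypothetical, invented just to show the longer chain.

```python
import sqlite3

# Hypothetical four-table chain: pages -> chapters ->
# custombooks_collections -> bookCollections_users.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE bookCollections_users  (bookcollection_id INTEGER, user_id INTEGER);
CREATE TABLE custombooks_collections (custombook_id INTEGER, bookcollection_id INTEGER);
CREATE TABLE chapters (chapter_id INTEGER, custombook_id INTEGER);
CREATE TABLE pages    (page_id INTEGER, chapter_id INTEGER);
INSERT INTO bookCollections_users VALUES (10, 1);
INSERT INTO custombooks_collections VALUES (100, 10);
INSERT INTO chapters VALUES (1000, 100);
INSERT INTO pages VALUES (5000, 1000);
""")

def user_owns_page(conn, user_id, page_id):
    """One round-trip: chain joins from the leaf record up to the
    owning user, and check ownership in the WHERE clause."""
    row = conn.execute("""
        SELECT 1
        FROM pages p
        JOIN chapters ch
          ON ch.chapter_id = p.chapter_id
        JOIN custombooks_collections cc
          ON cc.custombook_id = ch.custombook_id
        JOIN bookCollections_users bu
          ON bu.bookcollection_id = cc.bookcollection_id
        WHERE p.page_id = ? AND bu.user_id = ?
    """, (page_id, user_id)).fetchone()
    return row is not None

print(user_owns_page(conn, 1, 5000))  # owner -> True
print(user_owns_page(conn, 2, 5000))  # not the owner -> False
```

The query stays readable because each join follows one foreign key, and the engine resolves the whole chain in one pass instead of the application making a query per table.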
Applications come and go, and even the ones that stay around change to adapt to users' needs. Data tends to stay around from the dawn of time in many cases, and through at least five generations of application code in nearly every case I've seen...
If you get the data storage (schema) right, you can build many applications on top of it, and can even support multiple generations of the application (I've done that more times than I can count). Don't sacrifice long-term usability for short-term convenience; it will kill you if you stay around long enough!
I was worried about the complexity of the SELECT statements needed to retrieve the required information, but I guess that's the least of my concerns. I'll stick with making sure the database is well designed.