How many records are too many for a table?
I have a site that uses a WordPress backbone. For anyone unfamiliar with WordPress, there is a table called post_meta that holds a lot of info. On my site it holds the main bulk of the information and currently stands at around 10,000 records.
I have no idea about the scalability of SQL and wondered just how many records are 'healthy' in a table and how many are too many...?
I've done a bit of research on Google and heard about 'partitioning' tables, but I have no idea how this works or when it is necessary.
At what point does the number of records start to really slow the site down, or become unwieldy?
I realise that 10,000 isn't too many at the moment, but I want to get the structure in place now to allow the site to expand to a much larger size.
Am I right in thinking, though, that if you had a table of 10,000,000 records, searching that table would be a lot slower and therefore slow the website down? Or does it use some sort of clever algorithm to search that doesn't slow down that much?
Do you have an index on that table which would allow you to narrow those 10,000,000 rows down to a handful? If not, then even splitting the table up via some type of partitioning wouldn't help, as you could still be reading through the majority of the records.
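To make the difference concrete, here is a minimal sketch using SQLite from Python as a stand-in for MySQL. The table and the post_key/post_meta column names follow the question and are only illustrative. It shows the query planner switching from a full table scan to an index search once an index exists on the lookup column:

```python
import sqlite3

# In-memory SQLite database standing in for the real MySQL table;
# table and column names mirror the question and are illustrative.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE postmeta (id INTEGER PRIMARY KEY, post_key TEXT, post_meta TEXT)"
)
cur.executemany(
    "INSERT INTO postmeta (post_key, post_meta) VALUES (?, ?)",
    [(f"key_{i % 1000}", f"value_{i}") for i in range(10000)],
)

query = "EXPLAIN QUERY PLAN SELECT post_meta FROM postmeta WHERE post_key = 'key_42'"

# Without an index, the planner must scan all 10,000 rows.
before = cur.execute(query).fetchone()[3]
print(before)  # a full scan, e.g. "SCAN postmeta"

# With an index, the lookup becomes a B-tree search whose cost grows
# roughly with log(rows), so it stays fast even at millions of records.
cur.execute("CREATE INDEX idx_post_key ON postmeta (post_key)")
after = cur.execute(query).fetchone()[3]
print(after)  # an index search mentioning idx_post_key
```

The same idea carries over to MySQL: `CREATE INDEX idx_post_key ON postmeta (post_key);` and then `EXPLAIN SELECT ...` will show whether the index is actually being used.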
I do a couple of different searches on the site, but they normally search by an ID. I have a field called post_key and a field called post_meta.
So, for instance, I would search for all the post_meta that corresponds to a certain post_key.
Is that what you mean by "do you have an index on that table..." ?
Some of the searches are just for the 'latest 6 records' etc.
But I also offer my visitors a chance to search on the site and, in that instance, the query searches the entire database for a certain name or whatever... I suppose that last query would be severely impacted by a much bigger database?
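That kind of free-text search is indeed the one that scales worst: a condition like `LIKE '%name%'` (with a leading wildcard) cannot use an ordinary B-tree index, so the engine has to read every row. A rough sketch of the problem, again with SQLite standing in for MySQL and illustrative table/column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE postmeta (id INTEGER PRIMARY KEY, post_key TEXT, post_meta TEXT)"
)
# Even with an index on the searched column...
cur.execute("CREATE INDEX idx_post_meta ON postmeta (post_meta)")
cur.executemany(
    "INSERT INTO postmeta (post_key, post_meta) VALUES (?, ?)",
    [("name", f"visitor {i}") for i in range(1000)],
)

# ...a leading-wildcard LIKE defeats it: the planner falls back to a
# full scan, so the cost grows linearly with the size of the table.
detail = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM postmeta "
    "WHERE post_meta LIKE '%visitor 500%'"
).fetchone()[3]
print(detail)  # a full scan, e.g. "SCAN postmeta"
```

For exactly this use case, MySQL offers FULLTEXT indexes (available on InnoDB since MySQL 5.6), which index the words in a column so a site-wide text search doesn't have to scan the whole table.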
Thanks again for all the help and advice and please excuse my ignorance in this area.