I'm new to dBforums and hope someone can help with this project I'm working on for work.
The gist of it is: I'm querying a table with 1 million (1M) rows.
Question: Is there an efficient way to query the 1M rows? (You can disregard the query itself; it's very straightforward and simple.) What I want to know is which is faster: querying all 1M rows with a single query, or breaking it down and querying, say, 1000 rows at a time, repeated 1000 times?
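To make the comparison concrete, here's a rough sketch of the two approaches in Python with SQLite (the table name `items`, the column names, and the row counts are just placeholders for my real setup). The batched version uses keyset pagination, i.e. resuming from the last id seen, rather than OFFSET, which I understand gets slower on later pages:

```python
import sqlite3

# Placeholder setup: a table "items" with an integer primary key "id".
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO items (id, val) VALUES (?, ?)",
                 [(i, f"row{i}") for i in range(1, 10001)])

# Approach 1: one query that fetches everything at once.
all_rows = conn.execute("SELECT id, val FROM items ORDER BY id").fetchall()

# Approach 2: keyset pagination, fetching one batch at a time and
# resuming from the last id seen (cheaper than OFFSET on big tables).
def batched(conn, batch_size=1000):
    last_id = 0
    while True:
        rows = conn.execute(
            "SELECT id, val FROM items WHERE id > ? ORDER BY id LIMIT ?",
            (last_id, batch_size)).fetchall()
        if not rows:
            break
        yield from rows
        last_id = rows[-1][0]

# Both approaches should return the same rows in the same order.
assert list(batched(conn)) == all_rows
```

Both versions return identical results; the question is purely which one finishes faster at the 1M-row scale.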
On a side note, does query time scale linearly? (I.e., if it takes 1 minute to query 10 rows, does it take 10 minutes to query 100 rows?)
Do you guys have a recommended batch size (e.g., query 1567 rows at a time) that maximizes the efficiency of this query?
Note: By max efficiency, I mean minimizing the total time it takes to generate the table of data from the query.
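In case it helps, this is the kind of harness I was planning to use to answer the batch-size question empirically; again, the table and sizes are placeholders, and I assume the real answer depends on the DBMS, network round-trips, and client memory:

```python
import sqlite3
import time

# Placeholder data: 50k rows in an in-memory SQLite table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO items (id, val) VALUES (?, ?)",
                 [(i, "x" * 20) for i in range(1, 50001)])

def fetch_in_batches(batch_size):
    """Fetch every row via keyset pagination; return the row count."""
    last_id, total = 0, 0
    while True:
        rows = conn.execute(
            "SELECT id, val FROM items WHERE id > ? ORDER BY id LIMIT ?",
            (last_id, batch_size)).fetchall()
        if not rows:
            break
        total += len(rows)
        last_id = rows[-1][0]
    return total

# Time the full fetch at several batch sizes to see where it levels off.
for size in (100, 1000, 10000, 50000):
    t0 = time.perf_counter()
    n = fetch_in_batches(size)
    elapsed = time.perf_counter() - t0
    print(f"batch={size:>6}: {n} rows in {elapsed:.4f}s")
```

I'd run something like this against the real table and just pick whichever batch size comes out fastest, unless there's a known rule of thumb.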
Thanks in advance for any help.