I have been doing some reading on this, and according to Microsoft it is actually faster to requery the database if you know what data you are looking for, but I find that hard to believe. Here is my example, and I would love some feedback on it.
Example: I have 6 million records in a database, but I only want the data from October 1st through October 31st. Let's say the recordset for the requested range is 2 million records. Within that data, I then want to query for specific items over and over again to get a quantity within that time frame. Do you find it quicker to use the Filter method on the existing recordset, or to requery the database each time for the data you know you are looking for? I have tried it both ways and feel there is not much difference in speed.
Does anyone know the fastest method for doing this? It would be greatly appreciated.
If you use a client-side cursor, then the Filter method on the existing recordset should be faster, although the first call to the database will probably take longer.
When you use a client cursor, the records are cached in the client's memory, so filtering involves no round trip over the network. With a server-side cursor, by contrast, the recordset only holds pointers on the server indicating which records you are working with, so each operation goes back across the network.
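To make the trade-off concrete, here's a rough sketch. The thread is about ADO in VB, so this is only an analogue: sqlite3 stands in for the database, a Python list stands in for the client-side recordset, and filtering that list stands in for Recordset.Filter. The table, dates, and quantities are all made up.

```python
import sqlite3

# Toy database standing in for the real server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (item TEXT, order_date TEXT, qty INTEGER)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [
        ("widget", "2023-10-05", 3),
        ("gadget", "2023-10-12", 7),
        ("widget", "2023-10-20", 2),
        ("widget", "2023-09-28", 9),  # outside the October window
    ],
)

# Option 1: one trip to the server -- fetch the whole date range once
# (the client-cursor approach), then filter repeatedly in client memory.
october = conn.execute(
    "SELECT item, order_date, qty FROM orders "
    "WHERE order_date BETWEEN '2023-10-01' AND '2023-10-31'"
).fetchall()
widget_qty_filtered = sum(qty for item, _, qty in october if item == "widget")

# Option 2: requery the database for each specific item
# (what the Microsoft advice suggests).
widget_qty_requeried = conn.execute(
    "SELECT COALESCE(SUM(qty), 0) FROM orders "
    "WHERE item = 'widget' "
    "AND order_date BETWEEN '2023-10-01' AND '2023-10-31'"
).fetchone()[0]

print(widget_qty_filtered, widget_qty_requeried)  # both give 5
```

Option 1 pays one big up-front cost and then filters for free; option 2 pays a (smaller) network/query cost every time. Which wins depends on how many repeat lookups you do against the same range.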
Always try to be specific in your query, so the packets of data travelling across the network are as small as they can be. If your table has 20 fields and you need to display only 10, then request only those 10 fields instead of all of them. Likewise, don't request the whole table; request only the records that are needed for the procedure. And if you are not going to update the data, use a disconnected recordset (set the connection to Nothing after retrieving the data).
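Here is a rough sketch of that "fetch only what you need, then disconnect" pattern, again using Python/sqlite3 as an analogue rather than ADO (the ADO equivalent would be a client-cursor recordset whose ActiveConnection you set to Nothing). The table and values are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (item TEXT, order_date TEXT, qty INTEGER, "
    "notes TEXT, shipper TEXT)"  # extra fields the procedure doesn't need
)
conn.execute("INSERT INTO orders VALUES ('widget', '2023-10-05', 3, '', 'UPS')")
conn.execute("INSERT INTO orders VALUES ('gadget', '2023-10-12', 7, '', 'DHL')")

# Request only the fields the procedure needs, not SELECT *.
rows = conn.execute("SELECT item, qty FROM orders").fetchall()
conn.close()  # "disconnect": no server connection is held open any more

# All further work happens against the local copy in client memory.
total = sum(qty for _, qty in rows)
print(total)  # 10
```

The point is that the expensive resource (the connection and the network) is released as early as possible, and the read-only work runs entirely on the client.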
to err is human; to really mess things up requires a computer