  1. #1
    Join Date
    Aug 2010
    Posts
    57

Optimizing server lookup tables

    I have an Access front-end, SQL server back end application that is relatively slow over our WAN. I've been looking at the "Optimizing Microsoft Office Access Applications" document available from MS, and one of the areas it notes is to decrease the overhead associated with lookup tables.

    My application does indeed have a number of lookup tables which will rarely if ever change. When they do need to change I can send out a "server reboot" sort of message, so the front-end doesn't need to be smart about monitoring for changes.

    As far as I can gather there are several options available to me in order to perhaps optimize the lookup tables used by drop downs and the like. I'd be interested in hearing pros/cons of each approach.
    • Use Snapshot recordsets for the pulldowns. My understanding is that this happens automatically - but when I look at the query generated by Access for the drop down it appears to be a standard dynaset.
    • Use pass-through queries to populate the pulldowns. I could see this being marginally faster, but every time I open a form the query still needs to execute. Part of my performance problem is that certain forms open and close each time the user edits a record, so I'm not sure a pass-through would really help.
    • Create a local copy of the table. I'm leaning in this direction, but I'm concerned about potential gotchas. I wonder if you all think this is a good option.

  2. #2
    Join Date
    May 2005
    Location
    Nevada, USA
    Posts
    2,888
    Provided Answers: 6
    I have a couple of applications that are run via VPN from outside the building (also Access/SQL Server). They actually work pretty well. For the lookup tables, I went with the last option, with a twist. I empty/reload the local tables on application startup. I throw up a form with a "Please wait" type of message. It only takes a few seconds to load them (that would obviously depend on how much data we're talking about). The potential gotcha would be if you were using linked tables and had a query that joined a SQL Server table and a local table. Access would have to bring the whole linked table over the wire. I use either stored procedures or pass-through queries to avoid that.
    Paul
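    Paul's empty/reload approach can be sketched in miniature - this is just an illustration in Python, with sqlite3 standing in for both the SQL Server source and the local Access tables (the `refresh_lookup_cache` helper and table names are made up for the example, not anyone's actual code):

    ```python
    import sqlite3

    def refresh_lookup_cache(server_conn, local_conn, tables):
        """Empty each local lookup table, then reload it from the server copy."""
        for table in tables:
            rows = server_conn.execute(f"SELECT * FROM {table}").fetchall()
            local_conn.execute(f"DELETE FROM {table}")  # empty the local cache
            if rows:
                marks = ",".join("?" * len(rows[0]))
                local_conn.executemany(
                    f"INSERT INTO {table} VALUES ({marks})", rows)  # reload
        local_conn.commit()
    ```

    On startup you would call this once, for every cached lookup table, while the "Please wait" form is showing.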

  3. #3
    Join Date
    Aug 2010
    Posts
    57
    Ended up implementing the method @pbaldy suggested. The fundamental assumption is that the lookup tables behind the drop downs are relatively static - they change only occasionally, if ever. For example, the list of states is not likely to change any time soon. Employee titles are also fairly static, since adding a new title requires action by HR.

    For all of these lookups I create a local table. Upon startup, I dump the existing data and bring fresh copies of the data from the server. This all happens while a splash screen is up. This takes a very short amount of time, since all the lookup tables (almost by definition, since they populate combo boxes) are quite small.

    To avoid the problem of running a query that includes a join between a local table and a linked table, I've made the PK of the local tables a natural key containing the text that is usually wanted from the join (in fact, most of the lookup tables don't even have other columns). This way, there is never a need to join to the lookup table - the information is already captured in the FK field.
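    The natural-key idea can be illustrated with a tiny sketch (Python/sqlite3 for brevity; the `tblStates`/`tblEmployees` names and columns are hypothetical, not the poster's actual schema):

    ```python
    import sqlite3

    conn = sqlite3.connect(":memory:")

    # Lookup table: the natural key IS the display text, and the only column.
    conn.execute("CREATE TABLE tblStates (StateCode TEXT PRIMARY KEY)")
    conn.executemany("INSERT INTO tblStates VALUES (?)", [("NV",), ("CA",)])

    # The main table stores the natural key directly in its FK column...
    conn.execute("""
        CREATE TABLE tblEmployees (
            EmpID INTEGER PRIMARY KEY,
            StateCode TEXT REFERENCES tblStates (StateCode))""")
    conn.execute("INSERT INTO tblEmployees VALUES (1, 'NV')")

    # ...so the displayed value comes straight from the FK, no join needed.
    rows = conn.execute("SELECT EmpID, StateCode FROM tblEmployees").fetchall()
    ```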

    As a tip to others:
    I implemented this in the same function I use to create DSN-less connections. I have a local table that lists all of the server tables in my database. The master table captures for each table its server name, its local name, and whether the table should be linked or cached. I run this at app start to ensure all my table links and local tables are valid. This also allows me to very quickly change a table from a local copy to linked and vice-versa.
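    That master-table approach might look roughly like the following sketch (again Python/sqlite3 as a stand-in; the `tblMasterLinks` name and its columns are assumptions for illustration - real Access code would recreate DSN-less linked tables where this sketch merely returns the names to be linked):

    ```python
    import sqlite3

    def process_table_links(master_conn, server_conn, local_conn):
        """Walk a master list of server tables; cache a local copy of each
        table flagged CacheLocally, and collect the rest for linking."""
        linked = []
        for server_name, local_name, cache in master_conn.execute(
                "SELECT ServerName, LocalName, CacheLocally FROM tblMasterLinks"):
            if cache:
                # cached: empty the local table, reload it from the server
                rows = server_conn.execute(f"SELECT * FROM {server_name}").fetchall()
                local_conn.execute(f"DELETE FROM {local_name}")
                if rows:
                    marks = ",".join("?" * len(rows[0]))
                    local_conn.executemany(
                        f"INSERT INTO {local_name} VALUES ({marks})", rows)
            else:
                # linked: in Access this is where the DSN-less linked table
                # would be (re)created; here we just record the intent
                linked.append(local_name)
        local_conn.commit()
        return linked
    ```

    Flipping a table between cached and linked then only requires toggling one flag in the master table.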

  4. #4
    Join Date
    Nov 2004
    Location
    out on a limb
    Posts
    13,692
    Provided Answers: 59
    Depends on the topology of the organisation.
    One technique I've used in the past is to have a local (branch) datastore for this sort of static data. It's populated by a batch process running on the server, at intervals requested by the users (e.g. overnight, every 2 hours, whatever).
    That means you reduce the startup delays, and because you are not creating individual copies of the data, there is less risk of local stores becoming corrupted or falling out of step.
    I'd rather be riding on the Tiger 800 or the Norton

  5. #5
    Join Date
    Aug 2010
    Posts
    57
    @healdem - thanks for suggesting this AFTER I've gone through the effort of setting up the local copy code

    Seriously, though, I think what you are suggesting has some real merit. Next major iteration it is probably worth considering!
