Split Access 97 getting large and slow...
I have an Access 97 app that's been running for about 8 years with minor tweaks along the way. It is split front-end/back-end, with only tables in the back-end on the server. Users are experiencing a lag of about 15-30 seconds when opening forms based on the largest tables (18,000 records relationally linked through three primary keys to 90,000 subrecords). The total back-end .mdb is about 40 MB.
The old data becomes less necessary over time, so I am thinking of 'archiving' data over to another set of tables that would be accessible through the control panel on the front-end. (Users would be warned about possible lag when working with the huge archive.) This would reduce the size of the main tables.
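The archive step itself could be a pair of action queries: an append query to copy the old rows, then a delete query to remove them from the live table. A minimal sketch in Access SQL, where the table and field names (tblJobs, JobDate, tblJobs_Archive) and the cutoff date are placeholders for your own schema; the archive table must have the same structure as the live one:

```sql
-- Copy old rows into the archive table first
INSERT INTO tblJobs_Archive
SELECT * FROM tblJobs
WHERE JobDate < #1/1/2002#;

-- Only after the append succeeds, remove them from the live table
DELETE FROM tblJobs
WHERE JobDate < #1/1/2002#;
```

Running the append before the delete means a failure partway through leaves duplicates rather than lost data; if you drive these from VBA, you can wrap both in a Workspace transaction. Compact the back-end afterwards, since deleting rows alone will not shrink the .mdb.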
I do realize that the best way to solve my dilemma would be to move to SQL Server on the back-end so that the server performs the queries. I plan to do this eventually; however, I need a fix now, as I don't want the users to wait for me to recode this entire app. It is large, and recoding would take a lot of time, especially since it is not my main job function at my company.
A couple questions:
1. Should I create the archive tables in the existing back-end and just link to those tables the same way I do now?
- or -
2. Should I create a separate archive database that users would open via a different icon on the desktop?
My concern is how much data is sent over the wire with linked tables. Is it the entire database, or just the tables being acted on? If it is just the tables, then it might be best to keep the archive tables in the main back-end database. If the whole database gets flung across the network on each query, then a separate DB would be the better choice.
What do you think? How would you handle this? Any better ideas?
P.S. Hardware-wise, we are getting our network and user boxes converted to Gigabit Ethernet (from 100 Mb). The server is Gigabit and the switches are Gigabit, but not all boxen are Gigabit yet.
Thanks, norie. This DB is unique in that it is basically an accumulation of data: once data is input, it does not frequently get edited or deleted, so I tend not to compact it often. I did just now, and it shrank to 31 MB. I will try it to see if there is any speedup.
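Since compacting rarely happens by hand, it can also be scripted with DAO so it runs on a schedule. A sketch, with the UNC paths being placeholders; note that CompactDatabase cannot overwrite a file in place, so it writes to a temp name and then swaps, and every user must be out of the back-end while it runs:

```vba
Sub CompactBackEnd()
    ' Paths are placeholders -- adjust for your share.
    Dim src As String, dst As String
    src = "\\server\share\backend.mdb"
    dst = "\\server\share\backend_compacted.mdb"

    ' Compact into a new file, then replace the original.
    DBEngine.CompactDatabase src, dst
    Kill src
    Name dst As src
End Sub
```

For safety you might copy the original aside (e.g. rename it with a date suffix) instead of Kill-ing it, so a failed compact never costs you the live file.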
The DB has more tables than the ones I mentioned; in all there are about 160,000 records in the DB. Each record is a mix of text and integer fields, about 30 fields each. No memo fields, though.
Yes, the network may be a bottleneck, but as I mentioned, all our switches are now Gigabit.