  1. #1

    Unanswered: Split Access97 getting large and slow...

    I have an Access 97 app that's been running for about 8 years with minor tweaks along the way. It is a split front-end/back-end, with only tables in the back-end on the server. Users are experiencing lag of about 15-30 seconds when opening forms based on the largest tables (18,000 records relationally linked through three primary keys to 90,000 subrecords). The total back-end mdb size is about 40 MB.

    The old data becomes less necessary over time, so I am thinking of just 'archiving' data over to another set of tables that would be accessible through the control panel on the front-end (users would be warned about possible lag when dealing with the huge archive). This would reduce the size of the main tables.
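
    Something like this pair of queries is roughly what I have in mind. It's an untested sketch: tblMain, tblMainArchive, and EntryDate are just placeholder names, and the archive table would have to already exist with the same structure.

    Code:
    Sub ArchiveOldRecords()
        Dim db As DAO.Database
        Set db = CurrentDb()

        ' Copy rows older than the cutoff into the (pre-built) archive table...
        db.Execute "INSERT INTO tblMainArchive SELECT * FROM tblMain " & _
                   "WHERE EntryDate < #1/1/2004#", dbFailOnError

        ' ...then remove those rows from the live table.
        db.Execute "DELETE FROM tblMain WHERE EntryDate < #1/1/2004#", dbFailOnError
    End Sub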

    I do realize that the best way to solve my dilemma would be to move to SQL Server on the back-end so that the server performs the queries. I plan to do this eventually; however, I need a fix now, as I don't want the users to wait for me to recode this entire app. It is large, and recoding it would take a lot of time, especially as it is not my main job function at my company.

    A couple of questions:
    1. Should I create the archive tables in the existing back-end and just link to those tables the same as I do now?

    - or -

    2. Should I create a separate archive database which users would start up via a different icon on the desktop?

    My concern is how much data is sent over the wire with linked tables. Is it the entire database, or just the tables being acted on? If it is just the tables, then it might be best to keep the archive tables in the main back-end database. If the whole database gets flung across the network on each query, then a separate DB would be the better choice.
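
    Either way, as far as I understand it, the front-end mechanics are the same: a linked table is just a pointer to a file, so each link can aim at whichever mdb holds the data. A rough DAO sketch (the path and names are made up):

    Code:
    Sub LinkArchiveTable()
        Dim db As DAO.Database
        Dim td As DAO.TableDef

        Set db = CurrentDb()

        ' Create a link named tblMainArchive that points at the archive mdb.
        Set td = db.CreateTableDef("tblMainArchive")
        td.Connect = ";DATABASE=\\server\share\Archive_BE.mdb"
        td.SourceTableName = "tblMainArchive"
        db.TableDefs.Append td
    End Sub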

    What do you think? How would you handle this? Any better ideas?


    PS: Hardware-wise, we are converting our network and user boxes to Gigabit Ethernet (from 100 Mb). The server is Gigabit and the switches are Gigabit, but not all boxen are Gigabit yet.

  2. #2

    Is this database, BE and FE, regularly compacted and repaired?

    You say the BE is 40 megs but only has around 100K records.

    I've worked with databases holding a million records that haven't reached that size.

    Also could this actually be a network issue?

  3. #3
    Thanks norie. This db is unique in that it is basically an accumulation of data. Once data is input, it does not frequently get edited or deleted, so I tend not to compact it often. I did just now, and the file shrank to 31 MB. I will try it to see if there is any speedup.
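
    If compacting turns out to help, I may schedule it from code during off-hours. An untested sketch (the paths are examples, and everyone has to be out of the back-end first, since CompactDatabase needs exclusive access):

    Code:
    Sub CompactBackEnd()
        Const strLive As String = "\\server\data\MyApp_BE.mdb"
        Const strTemp As String = "\\server\data\MyApp_BE_tmp.mdb"

        ' Compact into a new file, then swap it in place of the original.
        DBEngine.CompactDatabase strLive, strTemp
        Kill strLive
        Name strTemp As strLive
    End Sub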

    The DB has more tables than the ones I mentioned; in all there are about 160,000 records in the DB. Each record is a mix of text and integer fields, about 30 fields each. No memo fields, though.

    Yes, the network may be the bottleneck, but as I mentioned, all our switches are now Gigabit.

  4. #4
    I have been searching for an answer to this for almost a year and haven't gotten anywhere.

    new computers, new routers, new switches, new cables...

    I've reimported the tables into a "fresh" Access file and split the back-end tables across two different computers (based on the type of info each holds and how much)...

    I've even tried different types of service providers, DSL vs Cable.

    I would love to get my employees to stop harassing me, so if you find any more information, please post. Thank you.
