Results 1 to 7 of 7
  1. #1
    Join Date
    Oct 2004
    Posts
    4

    Unanswered: Multiple users over VPN

    I'm a relatively new Access user who is remotely connecting over VPN to an Access DB on a "fileserver" on a peer-to-peer network. The VPN connection is "always on", and I have a drive mapped to the fileserver. When I first load the Access file, it takes about 2-3 minutes before I can start working. If no one else is updating records, my speed is fine. If someone else updates a record, my Access session "freezes" for 2-3 minutes... coincidentally the same amount of time it takes to load the DB in the first place. I assume this is because the entire DB is resynchronizing every time a record is updated. Is there any way to speed this process up? My updating records does not have a similar effect on the local users.

    Thanks for any help or suggestions,
    RJRon

    My computer: Dell laptop, P3-500, 128MB RAM
    Operating system: Windows XP Pro at both ends
    Access version: Access 2003 at both ends
    My internet connection: Cable, download consistently over 1000kbps, not sure about upload
    Their internet connection: DSL, download consistently over 1500kbps, upload 500kbps

  2. #2
    Join Date
    Sep 2004
    Location
    Tampa, FL
    Posts
    520
    Is the database split? That may have an impact on the performance.
    Darasen

  3. #3
    Join Date
    Sep 2004
    Location
    Charlotte, NC
    Posts
    164
    Darasen makes a good point. If you have not split your database (one back-end database with tables only, which resides on the network, and one front-end database with everything else {forms, queries, reports}, which can reside on each user's machine), you should. Here is a good link in Microsoft's Knowledge Base.
    http://office.microsoft.com/en-us/as...874531033.aspx
    Be sure to click on the category 'Improve performance in a multiuser environment' to expand it.
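
    In case a sketch helps picture the split: the back-end on the server holds nothing but tables, and every client keeps its own front-end with the forms, queries and reports. Purely as a rough illustration (not something the split itself requires), here is a small Python/pyodbc script, run from a client, that opens only the shared back-end and lists the tables it holds -- the UNC path, file name and exact ODBC driver name are assumptions, so adjust them to your setup.

    Code:
    # Rough sketch: peek inside the shared back-end from a client machine.
    # Assumptions: pyodbc is installed, the back-end lives at the (hypothetical)
    # UNC path below, and the Jet ODBC driver is registered as
    # "Microsoft Access Driver (*.mdb)" (typical on Access 2003-era machines).
    import pyodbc

    BACKEND = r"\\fileserver\data\mydata_be.mdb"  # hypothetical back-end path

    conn = pyodbc.connect(r"DRIVER={Microsoft Access Driver (*.mdb)};DBQ=" + BACKEND)
    cur = conn.cursor()

    # Only tables should live in the back-end; forms, queries and reports
    # belong in the front-end copy on each user's machine.
    for t in cur.tables(tableType="TABLE"):
        print(t.table_name)

    conn.close()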

    TD

  4. #4
    Join Date
    Dec 2002
    Location
    Préverenges, Switzerland
    Posts
    3,740
    a multi-user mdb (where multi is a very small integer and the mdb is less than 10MB) can work. BUT the more complications you put in its way (e.g. VPN), the more disgusted you will be with the performance.

    any smarts in your system are on the client: get used to this fact! the .mdb backend on your server is just a dumb, stupid lump of bytes in a file that can't do anything. worse, when your client opens the .mdb, another small .ldb (lock) file is created on the server, and you need the .ldb as well.

    a simple SELECT blah FROM tblBlah WHERE blah = 1; returns the complete .mdb and .ldb to the client... frankly, this is a recipe for gloom and despondency unless you have unused 100Mb/s bandwidth, few users, and a small .mdb backend. i've got installations like this that sort-of-work in real life: 3 users, 5MB .mdb, dedicated LAN @ 100Mb/s ...it's ok, but adding remote users or 20 local users is not IMHO an option.

    for completely free (any reasonable old PC plus totally free software), MySQL can improve the performance of a well-designed application (i.e. not linked tables!) by a factor of 50 for the first user, and you won't notice a performance hassle until you add the 1000th user. M$ SQL Server could be either half or twice as fast depending on your application, and it's easier to implement.
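
    to make that concrete, here is a rough sketch (python + PyMySQL, with made-up host, credentials and the tblBlah example from above) of the same query against a mysql backend: the filtering runs on the server, so only the matching rows travel over the VPN instead of the whole .mdb.

    Code:
    # rough sketch only -- server name, credentials, database and table are
    # placeholders; assumes PyMySQL is installed.
    import pymysql

    conn = pymysql.connect(
        host="dbserver",      # hypothetical MySQL server
        user="appuser",
        password="secret",
        database="mydata",
    )
    try:
        with conn.cursor() as cur:
            # parameterised query: the WHERE clause is evaluated on the server,
            # so only the matching rows come back over the wire.
            cur.execute("SELECT blah FROM tblBlah WHERE blah = %s", (1,))
            for row in cur.fetchall():
                print(row)
    finally:
        conn.close()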

    izy
    currently using SS 2008R2

  5. #5
    Join Date
    Apr 2002
    Location
    Germany
    Posts
    228
    I think the problem you are experiencing is mostly due to opportunistic locking (OPLocks). While upgrading to a "real" database backend will definitely improve performance big time, it might not be what you want to do immediately. With OPLocks, the client reads the whole file into a local cache and performs all changes on the local copy until a certain amount of data has accumulated, to avoid sending lots of small packets. However, if another user commits any data to the file (even if it is just one byte), your cache becomes invalid and the whole file has to be reread. Try turning them off; it might do wonders: http://support.microsoft.com/kb/296264/EN-US/
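
    For reference, the server-side switch described in that article is a single registry value on the machine that shares the .mdb. A hedged sketch in Python (winreg), assuming the value names I remember from KB 296264 are right -- please verify against the article before touching the registry; it needs admin rights and a reboot of the file server to take effect:

    Code:
    # Rough sketch: disable opportunistic locking on the machine sharing the .mdb.
    # Key and value names are as described in KB 296264 -- double-check there
    # before running. Requires admin rights; reboot the file server afterwards.
    import winreg

    SERVER_KEY = r"SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters"

    key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, SERVER_KEY, 0,
                         winreg.KEY_SET_VALUE)
    # EnableOplocks = 0 turns opportunistic locking off for files served
    # by this machine (clients have a separate MRXSmb\Parameters setting).
    winreg.SetValueEx(key, "EnableOplocks", 0, winreg.REG_DWORD, 0)
    winreg.CloseKey(key)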

  6. #6
    Join Date
    Oct 2004
    Posts
    4
    Thanks everyone for your suggestions. We've split the DB with only a little improvement. The OPLocks suggestion was VERY interesting, since it seems to address the specific issues we are having, but we have been reading that turning OPLocks off could increase the risk of corrupting the DB if the remote users crash or lose connection (not unlikely in a VPN situation).

    Apel, have you had any experience actually turning off OPLocks without problems? Anyone else?

    We are not experienced SQL users and are concerned that we will be at the mercy of consultants for every issue.

    Thanks again for any feedback.
    RJRon

  7. #7
    Join Date
    Apr 2002
    Location
    Germany
    Posts
    228
    I was under the impression that turning them off should actually improve reliability, since data is immediately committed to the server. Where did you read that it should be problematic? With OPLocks enabled, a connection interruption means you lose all dirty buffers in the client cache.

    I can only speak to running without OPLocks on Netware and Samba servers, where I have had no problems so far. That is on a very reliable network, though, so it's not really comparable. We'll be adding VPN access soon too, so I'm interested in this as well.
