Multi-frontend performance?

jhg · New member · Joined Jul 19, 2022 · Messages: 4
I have a split database with a single back-end accessed from two identical copies of a front-end. The back-end and front-ends are all located on a 1Gbps LAN. In this configuration, performance is abysmal because all query execution happens on the front-end systems, and involves shipping all the data (indexes and table rows) across the network. Even if all the necessary indexes exist, there's still too much data transfer across the network and performance is unusable.

With the front-end and back-end on a single machine performance is excellent, so my solution has been to host the database and front-ends on a single system, accessed via remote-desktop from the client systems where the front-ends "should" run (and would, if there was a real database server at the back-end).

We investigated converting the back-end to MSSQL Server, but since the application was originally developed over 15 years ago it uses a lot of DAO, and the conversion effort was deemed too large to be justified. We will eventually rewrite it, but in the interim I was wondering if there are any other tricks that would improve performance in the pure LAN configuration, i.e. front-ends running on the client systems without a database host and remote desktop.
 
OK, looks like you didn't get an answer from UA, so I'll offer this thought.

Based on this statement: "With the front-end and back-end on a single machine performance is excellent" plus "I have a split database with a single back-end accessed from two identical copies of a front-end. The back-end and front-ends are all located on a 1Gbps LAN. In this configuration, performance is abysmal"

I had a 40-user FE with a shared BE on a 1 Gb Ethernet and performance was pretty good. Not always 1 second, but usually not more than two or three seconds for typical actions. I had about 600,000 records active at any one time, with maybe 3,000 records added monthly and an equal number removed by archiving and cleanup. That description makes me think that your data-access pattern must involve a lot of network churning. That is to say, do you keep the BE open between queries (as long as the FE is open)? Or do you do a lot of implicit opening and closing of the connection? Your described behavior sounds like a network-based locking nightmare.

One thing that I did: when my FE opened, it opened a BE table that stayed open as long as my switchboard was open. When you do that, the FE does not have to establish a new network connection to the BE for every action, which ALSO means that the Windows file lock stays attached to the file and you don't have to go through access arbitration each time. To prevent too many lock collisions, I always set everything to Optimistic locking for updates, deletions, and inserts, and No Locks for all reports and SELECT queries.
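For anyone wanting to try this, here is a minimal sketch of that persistent-connection trick. The table and procedure names are illustrative, not from my actual app:

```
' Standard module: hold one BE recordset open for the life of the FE.
Option Explicit

Private rsPersist As DAO.Recordset

Public Sub OpenPersistentConnection()
    ' Open any small linked BE table and just leave it open.
    ' "tblSettings" is a placeholder - any linked table will do.
    Set rsPersist = CurrentDb.OpenRecordset("tblSettings", dbOpenSnapshot)
End Sub

Public Sub ClosePersistentConnection()
    If Not rsPersist Is Nothing Then
        rsPersist.Close
        Set rsPersist = Nothing
    End If
End Sub
```

Call OpenPersistentConnection from the switchboard's Form_Open and ClosePersistentConnection from its Form_Close, so the network connection (and the Windows file lock) stays up between actions.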

What is the pattern of your typical use?
 
@The_Doc_Man describes this in his last paragraph:

Frontends should be installed LOCALLY, as should the necessary Access installation. This is of course different for a terminal server, but even then each client should have its own frontend with its own Access.

Otherwise, the data transfer rate between FE and BE over a network will always be slower than if both are on the same computer. This will make weaknesses in the programming more apparent, especially queries (query design, index usage).
 
I agree with the comments about locating front ends on the local machine - not only are you bringing data across, but also all the forms and other Access 'stuff'.

involves shipping all the data (indexes and table rows) across the network.

this suggests to me that your forms are bound to tables for subsequent in-form filtering - you should be using queries with criteria to minimize the volume of data transferred across the network. There are also other things to consider - see this link
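As an illustration of what I mean (form and control names invented here), instead of binding the form to a whole table, bind it on demand to a query that only pulls the rows the user asked for:

```
' Bind the form to only the rows the user requested.
' tblOrders and txtCustomerID are placeholder names.
Private Sub cmdFind_Click()
    If IsNumeric(Me.txtCustomerID) Then
        Me.RecordSource = "SELECT * FROM tblOrders " & _
                          "WHERE CustomerID = " & Me.txtCustomerID
    End If
End Sub
```

With the criteria in the query, only the matching rows cross the network rather than the whole table plus a local filter.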

 
It was late at night when I read your post initially, so I totally missed that the FEs were ALSO remote. (I apologize for missing that crucial fact.)

You are REALLY stressing your network when you do that, because of the rule in Windows that file locking is always managed by the machine hosting the locked file. I'm not talking Access locks (yet). When you have a remote FE and are not using something like RDP (Remote Desktop Protocol) or Citrix (which uses RDP), then every new touch of the FE file has to ask another Windows machine to arbitrate another file lock for you. Gigabit Ethernet tops out at roughly 100 megabytes per second in practice, while a modern memory bus moves data at several gigabytes per second, so you have slowed down every file transaction by a factor of 10 or more just because it has to go over a slower medium than a memory bus. But it's worse than that.

Quick overview: Moving the FE files to be local to their users means that certain things will occur differently.

First, local files do in-memory lock checks so that will be a minimum of 10 times faster. Doing that remotely, the lock DB in the remote machine is the same speed but getting to it now involves a network hop which involves a handshake protocol. So there and back again for each lock check... make that 20 times speed difference. And Windows is a stern taskmaster. You don't touch a file without access arbitration.

Second, if that FE file is local, nobody else will try to access it, so you are nearly guaranteed there will be NO lock collisions. There CAN be no collisions. So that means that even the in-memory arbitration will have no competition.

Third, Access maintains a lock file alongside the database - .ldb for .mdb files, .laccdb for .accdb files - that it uses for managing internal locking. The name will match the FE file's name. We refer to either file as "the lock file." If the FE is local, the lock file is local; if the FE is remote, the lock file is remote. So that 20-fold speed difference just got more like a 40-fold difference, because remember that your copy of Access is running locally, so it has to touch both the FE and the lock file as separate, sequential actions.

Fourth, the odds are pretty good that your FEs that were coming from the remote server were at least partly sharing a folder path (even if you had separate leaf-node folders, one for each user). That means that at least PART of your file paths overlapped and that meant Windows remote lock arbitration because both users were accessing the same folders to get to where the FE files were kept. The network exchanges for mutually remote lock arbitration are HORRENDOUS.

My other comments in my earlier post are still valid. Thanks for the link, @ebs17
 
Converting the DAO to operate with SQL server shouldn't be that hard. You will probably have to add dbSeeChanges to a lot of queries. But the issue is more likely to be that the application was not built to optimize database access so simply converting the BE to SQL server will not only not help but is very likely to make performance worse.
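For example (table and column names made up here), an action query against a linked SQL Server table with an IDENTITY (autonumber) column needs the dbSeeChanges option, or Access raises run-time error 3622:

```
Dim db As DAO.Database
Set db = CurrentDb
' dbSeeChanges is required for linked SQL Server tables
' that contain an IDENTITY (autonumber) column.
db.Execute "UPDATE tblOrders SET Status = 'Shipped' " & _
           "WHERE OrderID = 123", dbFailOnError + dbSeeChanges
```

It is a mechanical change, but it has to be made everywhere Execute or OpenRecordset touches such a table, which is why the conversion is tedious rather than hard.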

You will get the most bang for your buck by determining the worst bottlenecks. The most expensive operation any application performs is I/O so that is where you start your optimization. Then examine all loops to ensure that code that should run ONCE is outside of the loop rather than inside. Find the forms that are used the most frequently and change them so that they are bound to queries with where clauses so they only select the data you actually need. NEVER select an entire table or use a query with no where clause and filter locally. This is the worst thing you can do.
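A small sketch of the loop point (query and names illustrative): anything that does not change per row belongs outside the loop, and the recordset itself should already be filtered by a WHERE clause:

```
Dim db As DAO.Database
Dim rs As DAO.Recordset

Set db = CurrentDb          ' get the database handle ONCE, not per row
Set rs = db.OpenRecordset( _
    "SELECT OrderID, Status FROM tblOrders WHERE Status = 'Open'", _
    dbOpenDynaset)

Do Until rs.EOF
    ' Only per-row work goes inside the loop.
    rs.Edit
    rs!Status = "Reviewed"
    rs.Update
    rs.MoveNext
Loop

rs.Close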

Make sure that relationships are defined and that indexes are appropriate for the data used for searching. Over-indexing is almost as bad as under-indexing.
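For instance (names illustrative), an index on a column actually used for searching can be added with a one-line DDL statement:

```
' Index the column actually used in WHERE clauses and joins.
CurrentDb.Execute _
    "CREATE INDEX idxOrdersCustomerID ON tblOrders (CustomerID)"
```

Remember that every index also has to be maintained on insert, update, and delete, which is why over-indexing hurts too.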
 
I don't think the network is necessarily to blame.

Maintaining a persistent connection with multiple users is probably the biggest issue.

Writing queries to minimise the number of records that are returned is important.

Also, use a wired connection rather than wireless.
 
I don't think the network is necessarily to blame.

I agree. It is the particular (mis-)use of the network that is giving trouble by requiring not only data but also infrastructure (including code) to be downloaded at every turn.
 
Whilst this is unlikely, and I have mentioned it before, make sure the data is not going in a roundabout path.
That happened to me in a company I worked for. Despite our server being at the side of my desk the network was quite slow. Turned out that the data was going to Coventry and then back to Birmingham. :-)
 
Good point, @Gasman ... I had the same thing happen with the Navy. Our server in New Orleans was upstairs two floors from my workstation in New Orleans, but because of the network I was on and the network the server was on, we had to go through a firewall in Norfolk. AND as it happened, there was at least one slow-hop network segment in that run. We changed how we ran things slightly to take into account a security mandate and moved the BE to a server on a different network. (Had to change from using an Outlook mail method to a CDO mail method, too complex to explain here.) Things changed from abysmally slow to fairly peppy, with a few occasional delays for folks running really large and highly detailed reports.

Abysmally slow meant "updates on single specific actions" taking 5-10 minutes. Fairly peppy meant "group updates on 50 different actions" taking maybe 2 seconds tops and sometimes within 1 second.

However, our OP reports that FE and BE are on the same LAN, so I'm kind of doubting the circuitous route is significant here.
 
Doc, TBH, I thought we were on the same LAN, but after I complained about how slow it was and they investigated, they found the issue.

It is unlikely I admit, but if all else fails.
This week I had to call my ISP as I could not get into the router. Chrome said the connection was being refused.
Whilst I was waiting to get through, I thought I would try another browser. Lo and behold, that worked! So it was Chrome at fault. I cleared the cache for the last 7 days and it was working again.
 