Slow Access Split Database on network issue

axelnorman

New member
Local time
Today, 01:03
Joined
Dec 5, 2022
Messages
3
I've developed a simple database for internal resource planning, currently about 2.5 MB. Everything is super responsive when running it as a single file on the office network, even when accessing the office network through VPN.

The database will now be used by 4 department managers, so I decided to split it to reduce the risk of corrupting the database and to give me a better way of developing new front ends without blocking the users. The split operation was successful, but performance is now extremely poor. Opening a form that took 2 seconds with the database as one single file on the network now takes 60 seconds when the database is split and stored in the same network folder. There is no improvement if the front end is stored locally and the back end on the network. I have done a lot of testing over two days, but found no solutions:
  • Tried mapping the BE using a UNC path and drive letters, but no difference
  • Tried establishing a persistent connection to the BE, without any measurable increase in performance
  • Performance is equally bad whether I am hardwired to the office network or connecting through VPN
  • Moving both BE and FE to the local computer solves the speed issue.
Then I tried one workaround to narrow down the problem:
  • Front End on local machine
  • Made a symbolic link to the back end. The real BE is on the network, while the symlink is on the local machine (using MKLINK from a command prompt)
  • Linked the FE to the BE through this symlink. The FE then believes it is accessing a locally stored BE, but in reality the BE is on the network.
  • This totally FIXED the performance issue!
How can this be? All BE access still goes through the network, even if the network access is "hidden" from Access. I am tempted to conclude that the performance issue is therefore somehow related to how Access accesses files on the network.

Unfortunately this is not a workaround I can actually use, since the lock file for the BE is then placed locally. I also tried making a symbolic directory link, since the lock file would then go to the network drive; however, this gave me the same poor performance.

Any clues on how to get to the bottom of this split database performance issue?

Axel
 

Minty

AWF VIP
Local time
Today, 00:03
Joined
Jul 26, 2013
Messages
10,371
The correct set-up should be a local front end on each PC with network shared back end file.
Does everyone have the correct permissions for the BE storage location, is the FE in a trusted location?

Ideally use a hard-wired connection; wireless can be troublesome. The faster the better.
Is the network file server a dedicated unit or something like a stand-alone NAS box?
Are your forms loading all the records from the underlying tables?
I'm not sure what you mean by a symbolic link?
 

arnelgp

..forever waiting... waiting for jellybean!
Local time
Today, 07:03
Joined
May 7, 2009
Messages
19,245
consider creating a "Persistent Connection" to the back end.
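For example, a minimal sketch in VBA. This holds the back end open for the whole session so its lock file is created once rather than on every table access; the UNC path is a placeholder for your own BE location:

```vba
' Keep the back end open for the life of the session so the .laccdb
' lock file is created once, not repeatedly on every table access.
' "\\server\share\Backend.accdb" is a placeholder path.
Private dbPersist As DAO.Database

Public Sub OpenPersistentConnection()
    Set dbPersist = OpenDatabase("\\server\share\Backend.accdb")
End Sub

Public Sub ClosePersistentConnection()
    On Error Resume Next
    dbPersist.Close
    Set dbPersist = Nothing
End Sub
```

Call OpenPersistentConnection from your startup form's Open event and ClosePersistentConnection when the app shuts down.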
 

axelnorman

New member
Local time
Today, 01:03
Joined
Dec 5, 2022
Messages
3
The correct set-up should be a local front end on each PC with network shared back end file.
Does everyone have the correct permissions for the BE storage location, is the FE in a trusted location?

Ideally use a hard-wired connection; wireless can be troublesome. The faster the better.
Is the network file server a dedicated unit or something like a stand-alone NAS box?
Are your forms loading all the records from the underlying tables?
I'm not sure what you mean by a symbolic link?
  • Trusted location - Yes
  • Correct permissions - Yes
  • Front End locally and BE on network - Yes (also tried FE&BE on network- same issue)
  • Network server: I don't really know what kind of server it is. It's just provided by the IT department, and I haven't been able to get hold of anyone in IT who could answer. It is at least a dedicated server, but our IT is outsourced, so it is difficult to find the details.
  • Are my forms loading all the underlying tables? Not sure what you mean here. I have some forms with the RecordSource set to tables, and some with the RecordSource set to queries.
  • Symbolic link: using MKLINK to create a "shortcut" that actually just points to a file in another location. Access thinks it is accessing a local file, but the symlink redirects to the network location.
It seems to me that this symbolic link test is a strong indication that the performance issue is closely related to how Access handles file access over the network. But I'm not sure how to move forward, since I need the lock file to be on the network, and that is not the case when I use a symbolic link.
 

Minty

AWF VIP
Local time
Today, 00:03
Joined
Jul 26, 2013
Messages
10,371
If the back end is open, then the lock file for the BE will be opened wherever the BE is stored, not locally.

So if that Symbolic link cures your problem, I don't see what the problem with using it would be.
If it's getting you connected to the correct BE data path, then use it.

I've seen some strange behavior with redirected folders in corporate environments.
 

Pat Hartman

Super Moderator
Staff member
Local time
Yesterday, 19:03
Joined
Feb 19, 2002
Messages
43,293
Make a backup.
Start with the master copy of the FE. Delete all the links. Compact & Repair (C&R). Hold the Shift key down and open the FE. Create new links to the BE on the server. C&R again. Does that fix the problem?
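If you later want to script the relinking instead of recreating the links by hand, something along these lines usually works (the UNC path in the usage line is a placeholder):

```vba
' Repoint every linked Jet/ACE table at the back end on the server,
' then refresh the link.
Public Sub RelinkTables(ByVal strBEPath As String)
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef
    Set db = CurrentDb
    For Each tdf In db.TableDefs
        ' Linked Jet/ACE tables have a Connect string like ";DATABASE=..."
        ' (local tables have an empty Connect; ODBC links start with "ODBC;")
        If Len(tdf.Connect) > 0 And Left$(tdf.Connect, 5) <> "ODBC;" Then
            tdf.Connect = ";DATABASE=" & strBEPath
            tdf.RefreshLink
        End If
    Next tdf
End Sub

' Usage: RelinkTables "\\server\share\Backend.accdb"
```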

Using a Jet/ACE BE with a VPN will always be painfully slow. If you must use a VPN, switch to SQL Server for better performance.

1. You don't need a dedicated server for the Access BE
2. If you can't open the lock file in the same folder as the BE, then the database CANNOT be shared.
 

gemma-the-husky

Super Moderator
Staff member
Local time
Today, 00:03
Joined
Sep 12, 2006
Messages
15,658
Tried it, but it has no effect on performance
How are you creating the persistent connection? This is generally the issue with a multi-user database, so it's worth making sure. Get all but one user to disconnect; if the speed returns to normal, then it definitely indicates a persistent connection issue.
 

Privateer

Registered User.
Local time
Yesterday, 19:03
Joined
Aug 16, 2011
Messages
193
I wanted to continue this thread because I have a few questions on this issue as well. The database and I are in Connecticut, USA, and people in Europe and India are complaining about the speed: about 30 to 60 seconds to open a form, over ten to fifteen hops, most of them with triple-digit latencies. When I spoke with another Access developer, he said Access retrieves data differently than other applications, and that the latency or the method of retrieval is the real speed bump. His comments were puzzling, so I am asking if anyone has a deep dive on how Access retrieves data. I will also try the persistent connection thing. The concern there is that if someone loses their connection, the lock file never goes away, and I can't take a copy for testing or backup, nor can I delete it, because I don't have rights to that file; yeah, that's a network thing.

The other question is really asking for the best solution. Everyone is working remotely, so obviously the ISP is suspect, as is pulling data from half a world away. Can ODBC be used to link to an Access BE if it's just on a network share? Is moving it to SQL Server the best answer? Is there a way, besides TRACERT, to test the company's network and find the speed bumps? I am worried about spending days migrating the tables and queries into SQL Server only to see no increase in speed because something on the network is the real culprit. As always, any help would be appreciated.
Thanks,
Mike
 

Minty

AWF VIP
Local time
Today, 00:03
Joined
Jul 26, 2013
Messages
10,371
If you have people in far-off places, the only real solution is remote desktop.
From experience, even with a Microsoft-hosted Azure SQL back end, the latency from parts of India to Europe is poor (300-500 ms), which makes for poor performance even when you have done everything to mitigate the effects.


Bear in mind these numbers are from Microsoft site to site, which will be on a pipe the size of a container ship; with a couple of local infrastructures added at each end, it will only be worse.
 

gemma-the-husky

Super Moderator
Staff member
Local time
Today, 00:03
Joined
Sep 12, 2006
Messages
15,658
I wanted to continue this thread because I have a few questions on this issue as well. The database and I are in Connecticut, USA, and people in Europe and India are complaining about the speed: about 30 to 60 seconds to open a form, over ten to fifteen hops, most of them with triple-digit latencies. When I spoke with another Access developer, he said Access retrieves data differently than other applications, and that the latency or the method of retrieval is the real speed bump. His comments were puzzling, so I am asking if anyone has a deep dive on how Access retrieves data. I will also try the persistent connection thing. The concern there is that if someone loses their connection, the lock file never goes away, and I can't take a copy for testing or backup, nor can I delete it, because I don't have rights to that file; yeah, that's a network thing.

The other question is really asking for the best solution. Everyone is working remotely, so obviously the ISP is suspect, as is pulling data from half a world away. Can ODBC be used to link to an Access BE if it's just on a network share? Is moving it to SQL Server the best answer? Is there a way, besides TRACERT, to test the company's network and find the speed bumps? I am worried about spending days migrating the tables and queries into SQL Server only to see no increase in speed because something on the network is the real culprit. As always, any help would be appreciated.
Thanks,
Mike

What @Minty said.

With WANs you might be able to improve things by being very careful with database design to minimize the amount of data that is transferred. Access will pull data from the server to the end user, so you need to transfer the smallest amount of data possible. If there's no alternative, it's better than nothing.
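As a concrete illustration (the form, table and field names here are invented), that mostly means opening forms on a single record or a filtered query instead of binding them to whole tables:

```vba
' Open the form filtered to one record so only that row crosses the
' WAN, instead of binding the form to the entire table.
' frmOrder and OrderID are illustrative names, not from the thread.
Public Sub ShowOrder(ByVal lngOrderID As Long)
    DoCmd.OpenForm "frmOrder", WhereCondition:="OrderID = " & lngOrderID
End Sub
```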

You might be able to distribute data sets to the remote locations and not have real-time updating, but that might not be practical; merging distributed data is complicated. If the remote locations only need to read data, that might work.

That's why terminal servers work: the TS shares the same LAN as the data, and all that gets transferred to the end user is the screen refresh.
 

Pat Hartman

Super Moderator
Staff member
Local time
Yesterday, 19:03
Joined
Feb 19, 2002
Messages
43,293
I am worried about spending days migrating the tables and queries into SQL Server only to see no increase in speed because something on the network is the real culprit
Migrating the tables takes a few minutes. It is not necessary to migrate the queries. If any are particularly slow, you can work on them later.

The problem with converting to SQL Server is that if you have not used good client/server techniques to design your application, the conversion will make the app worse.

You can spend a couple of hours to do a quick test. Just migrate the tables and see how the app performs on your local LAN.

But Remote Desktop or Citrix (even if you convert the BE) will be your best option.

One app I hosted in Connecticut using Citrix served people from San Francisco to Paris. The remote clients had better response than the local clients:)
 

Privateer

Registered User.
Local time
Yesterday, 19:03
Joined
Aug 16, 2011
Messages
193
Thank you all for the feedback; it helps to know I have been considering similar options as you guys.

To Minty, thanks for the speed test and the cloud suggestion. Unfortunately, the sensitivity of the data prevents any of this information from being stored in the cloud.

To Dave, I have tried to tighten the queries; I only bring over the records I need, rather than pulling a table and filtering, so I think I have that optimized, but there is always room for improvement, right? One option we considered was to relocate the server to Europe, as it would be more centralized for the majority of users. We also considered distributed data sets, but combining the daily data into the main database sounds like a nightmare. Others here have suggested copying the key working tables to the local front end, working that way, and dumping the new data back to the mother ship. Frankly, I have nightmares about making sure each record has its own primary key, and other issues I can't even imagine. The flip side to that suggestion was to copy all the metadata tables into the FE, as they don't change that often, and keep the working tables in the BE where the PKs stay organized.

To Pat, I have only had to migrate the tables to SQL Server a few times, and even with that button, I still had to edit every table, identify the primary keys, and set up the indexes. I have VBA code now to rename the links without the "dbo" prefix, but I did not have it back then. And I usually like to have the key queries as views on the server, so they get crunched there and as little data as possible gets transmitted. We are trying to get IT to give us access to a server so I can test a few forms that pull from big tables, but they are stalling. I suspect giving sa permission to an outside contractor is not something they are interested in doing either.

I think the final configuration will be a combination of the things suggested here. We will probably move everything to SQL Server and see how that works, and use Citrix, which we already have, for those on the other side of the world. Citrix also offers some added security as they have no access to any other parts of the network.

I thank you again for the time and effort you put into your responses, they helped. Any developments on my end will be shared as I think the situation here offers a unique opportunity to test various methods.
Cheers,
Mike
 

Pat Hartman

Super Moderator
Staff member
Local time
Yesterday, 19:03
Joined
Feb 19, 2002
Messages
43,293
To Pat, I have only had to migrate the tables to SQL Server a few times, and even with that button, I still had to edit every table and identify the primary keys and setup the indexes.
Then you did it manually rather than using SSMA. That free tool migrates everything you designed in Jet/ACE to SQL Server and renames the linked tables :) All the tables, relationships, and indexes. I don't know if it converts data macros into triggers though. If the schema is correct in Access, it will be correct on the server after migration.

You do have to look at a few settings though. The default target data type for datetime is NOT compatible with the standard, default ODBC driver, so you have two options: specify the target data type to be DateTime, OR distribute a new ODBC driver to every single user, and to each new user you add later. If you don't have good IT support, it is easier to look through the conversion data types to ensure that nothing is getting converted to a "new" data type that the old driver cannot handle. Until MS starts distributing a "new" ODBC driver automatically when Access is installed, I don't ever use any "new" SQL Server data types. Obviously, if I need to use a new data type, I have no alternative but to deal with the distribution of the proper ODBC driver.

If you use any DAO, you may need to add a few arguments to your OpenRecordset calls. You can add them now, before the conversion, since Access is OK with them being there. Just look for all the .OpenRecordset commands and add the arguments:

' dbSeeChanges is required when opening a dynaset on an ODBC table with an IDENTITY (autonumber) column
Set rsLtr = tdLtr.OpenRecordset(dbOpenDynaset, dbSeeChanges)

Once you do the straight conversion and some testing, you will know what you have to tweak.

Before I do a conversion, I ALWAYS go through the entire schema ensuring that required fields are required, defaults are set where needed, AllowZLS is set to No, RI is properly enforced, and every index is correct (don't forget that Access automagically creates HIDDEN indexes for every field you use as a foreign key, and even these hidden indexes are converted). If you made the mistake of letting Access automatically create indexes based on field-name suffixes, you will have duplicate indexes for the FK fields if they end in one of the AutoIndex suffixes like CD or ID.

Since I never know when an app is going to later be required to upsize to SQL Server, I create ALL apps as if the BE would be SQL Server from the start. Makes my conversions trivial instead of traumatic.
 

The_Doc_Man

Immoderate Moderator
Staff member
Local time
Yesterday, 18:03
Joined
Feb 28, 2001
Messages
27,189
Based on network issues, having the FE and the BE BOTH on a remote host is a disaster waiting to happen, split-file or not. You should always have the FE file local to the user's workstation. You should also be sure that your queries are set for optimistic locking.

Keeping a split FE as a remote file to reduce locks is a chimera, even if you have a separate remote copy for each user. It buys you nothing safety-wise and doesn't reduce the number of locks, because the danger is basically the window of opportunity for the BE (which contains the shared data) to encounter some type of interference. Having the FE on the same box as the BE lengthens that window, whether it is the same FE or different FE files. The issue is the amount of network traffic for file lock management.

The reason is that file locking in Windows in this scenario is using a DISTRIBUTED LOCK MANAGER (DLM). That means, in practical terms, that the host holding the file local to itself is also the one managing its locks. All external (i.e. networked) users are using remote lock management. I.e. Access is communicating with the DLM over the network, whereas if it were managing locks locally, it would be operating at backplane/memory speed as it talked to the local file system drivers. If you have 100 MBit Ethernet at your facility, this means that file locking slows down not less than ten-fold vs. memory speed, and maybe as much as 100-fold. Most people probably don't have 10 MBit Ethernet any more, but if you have that, it adds another order of magnitude to the slowdown. The only way to assure maximum file speed for Access is to have local copies of the FE file, one per user.

The speed of this remote-host interaction depends on the slowest link in the network, BUT it also depends on how many locks are involved. The number of lock interactions for a given file varies directly with the number of users, and unlike table searches, which have an expectation value of searching "half the number of elements" on average, file locking requires every user's interaction with every other user. Basically, EVERY USER of a shared file must concur with the file's usage.

There is no improvement if the front end is stored locally and the back end on the network.

This tells me that there must be a serious network hardware problem, or the FE must have been mislinked in some way. The lock dynamics of the Windows file system (probably NTFS) simply don't work that way. Therefore, the real nature of the problem is somehow being obscured. If your SYMLINK operation improves matters, I don't know how it does so, since physically the BE file location hasn't changed.
 

Solo712

Registered User.
Local time
Yesterday, 19:03
Joined
Oct 19, 2012
Messages
828
  • Network server: I don't really know what kind of server it is. It's just provided by the IT department, and I haven't been able to get hold of anyone in IT who could answer. It is at least a dedicated server, but our IT is outsourced, so it is difficult to find the details.
Hi, I suspect the performance issue has to do with the server configuration, because you get two dramatically different response times with two different addressing methods. It could be that the symlink hits the i-node directly, bypassing a firewall or a load balancer or some such. This could of course be a security breach, but the slow response of the direct path could also be the result of a configuration error on a network device. I would definitely contact the network guys and tell them of your discovery.

Best,
Jiri
 

Cronk

Registered User.
Local time
Today, 09:03
Joined
Jul 4, 2013
Messages
2,772
Based on network issues, having the FE and the BE BOTH on a remote host is a disaster waiting to happen,
I'd say that needs to be qualified. Agreed, when users are sharing the one FE. However, there are circumstances where users do not have a C drive on which to store the FE. One is where corporate IT blocks Access dbs from running on C drives. Another is working remotely via thin-client connections where everything is run on the host server.

In such cases, I have users running the FE from their individual H: drives on the server.
 

The_Doc_Man

Immoderate Moderator
Staff member
Local time
Yesterday, 18:03
Joined
Feb 28, 2001
Messages
27,189
OK, Cronk, I'll qualify it. Having the FE and BE on the same host is a disaster waiting to happen unless:

1. You are using something like RDP or CITRIX... AND the FE files are kept separate for each user.

2. You have not less than 100 Mbit Ethernet LAN connections with no more than one or two hops internally AND every user has their own FE file in a private (i.e. non-shared) folder.

As to "IT blocks Access DBs from running on C drives": it is time to talk to IT about allowing a partition on the C drive so users can have local files under another drive name. I had to do that for a U.S. Navy site hosting some computers classified SECRET, so the IT staff got their panties in a wad every time I came to talk about configurations. But I got 90% of my needs met by being persistent and discussing the application of "best security practices" as applied to Windows and database applications.
 

KitaYama

Well-known member
Local time
Today, 08:03
Joined
Jan 6, 2022
Messages
1,541
I'd like to add one more situation to what @Cronk suggested.

We are on a Windows domain, and all clients log in to the domain on startup. Every client's My Documents, Desktop, My Pictures and all other libraries are redirected to specified folders on a file-sharing server. It means the only thing that resides on the client side is a C drive with the OS and applications on it.
This is because, in case of any failure, IT refreshes the client from a saved image, and all documents and files remain intact on the server. They then change some registry keys and set the defaults back to the specified folders on the server under the same user name. This lets them bring everything back to its normal state in less than 30 minutes when there is trouble.

When we save files on our Desktop, My Documents, etc., they are actually saved on the server. That means our FE is on the server too. Since we are on a wired LAN, we don't feel any performance problem at all.
 
