Slow Database Issue ... please help!!

traci (Registered User · Joined Jun 15, 2016 · 12 messages)
I have been doing Access development for 20 years and this is the most frustrating problem yet. I have a new database I created from scratch, with the Access backend on the server and Access frontends distributed to users' desktops. When I test it alone, it's lightning fast. As soon as even one other person opens the database, even if they are just sitting on the main switchboard and not opening any other forms, it immediately slows down to about half speed! We have tried this with various people on various machines, with the same result: a noticeable delay moving from record to record, clicking on tabs, etc. I admit the main form is somewhat complex ... lots of tabs with subforms. This is where the slowness mostly occurs. However, I have done just about everything the experts recommend for improving performance, and it has helped a little, but not enough. There is still a very noticeable delay. I have done the following:

• Load only one record at a time instead of all records (only the record the user wants to look at).
• Create a permanent connection to the backend by basing the main switchboard on a small table with one record.
• Load subform or combobox sources only when the user clicks on the tab containing that subform or combobox.
• Change primary key indexes from Descending to Ascending (only had this on one table and a few related child tables).
• Replace all domain aggregate functions (DCount / DLookup) with recordset counts or an "ELookup" function in VBA and queries.
• Do more work up front, once, when the form loads, rather than on Form_Current.
• Add primary keys to all tables (only 3 tables lacked primary keys, but all have unique indexes).
• Replace Nz functions with IIf in queries.
• Analyze Group By queries and make sure they are efficient.
• Add Option Explicit to all modules.
• Base forms on queries instead of tables when I don't need all fields.
• Uncheck "Use Windows-Themed Controls in Forms" in database options.
• Turn on Unicode compression for all text and memo fields (table property).
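For reference, the permanent-connection trick in the second bullet boils down to holding a recordset open for the life of the front end. A minimal sketch, assuming a hypothetical one-record linked table named tblDummy:

```vba
' Module-level recordset keeps the backend connection alive for the
' whole session (tblDummy is a hypothetical one-record linked table)
Private rsKeepOpen As DAO.Recordset

Public Sub OpenPersistentConnection()
    Set rsKeepOpen = CurrentDb.OpenRecordset("tblDummy", dbOpenSnapshot)
End Sub

Public Sub ClosePersistentConnection()
    If Not rsKeepOpen Is Nothing Then
        rsKeepOpen.Close
        Set rsKeepOpen = Nothing
    End If
End Sub
```

Call the open routine from the switchboard's Form_Open and the close routine on exit; as long as the recordset stays open, the lock file on the backend never churns.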

I have always had the basic performance enhancements in place ... turning off subdatasheets, turning off autocorrect, etc.

The other odd thing is ... if I test it only on my machine but open three separate instances (three separate front-end copies on my machine, pretending to be three different users), it is lightning fast in all three, even when all instances are in the same large form and doing various things on the tabs.

I am puzzled and frustrated and not sure what to do at this point. I thought about possibly moving the backend to SQL Server, but some say that may not improve the speed. Any opinions on that? Could there be a server or network issue? The IT people said the network is 100 Mb/s, although one of them said that in this day and age it should really be 1 Gb/s. I don't know if that is an issue or not.

Any help or advice is appreciated! Thanks.
 
A 1Gb network will make a surprisingly large difference to your database performance, even over shared switches in desk groups.
We noticed it on site, and if you do some reading up you'll find it's not just the raw link speed that makes the difference; gigabit networking also handles traffic in a much more resilient way.
SQL Server will make a much bigger difference though, and you can then start to move some of your background processing to it, which will improve matters further. TBH, do both.
 
Thanks Minty. If I could wave a wand, we'd have 1G bandwidth. Unfortunately, that's not even my department and I don't have magic powers either. :(

I have gotten some other feedback on things to try, such as making the FE an MDE. The IT guys also said they tried something on the server, and when it reboots tonight they want me to try again. I'll be off site Thurs and Fri, so I may try it Monday and see what happens.

But if all the above fails, I may just have to suck it up and move the tables to SQL Server. I just have a feeling this problem is bigger than that, but who knows. I have done tons of these systems over the years at various places ... some with 15 concurrent users or more, and never a problem. I've only worked for this company since last summer, and this is the first time I've run into anything like this, which is why I suspect the network or server. I'll keep you posted.
 
Base forms on queries instead of tables when I don't need all fields.

I'm perhaps a maverick here, but in an FE/BE situation I NEVER use tables as direct record sources for forms. NEVER EVER. Instead, I build queries and set each query's properties to optimistic locking.

Even on a 100Mb network, raw bandwidth isn't your issue; your trick of opening multiple instances on a single machine showed that. (However, to hedge that bet ... open Task Manager in that scenario and verify whether you have one or three copies of MSACCESS.EXE running at the time.)

Replace all domain aggregate functions (DCount / DLookup) with recordset counts or an "ELookup" function in VBA and queries.

I'm not going to quibble about DCount in queries, but DCount in ordinary VBA code isn't such a burden; the overhead of a recordset count is almost the same. (I am NOT making that claim for in-SQL cases, only for in-VBA uses of domain aggregates.)
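To illustrate, these two VBA fragments end up doing roughly the same work against a linked backend table (table and field names are invented for the example):

```vba
' Domain aggregate version (tblOrders / CustomerID are illustrative)
Dim n1 As Long
n1 = DCount("*", "tblOrders", "CustomerID = 42")

' Recordset version -- roughly the same I/O against the backend
Dim rs As DAO.Recordset, n2 As Long
Set rs = CurrentDb.OpenRecordset( _
    "SELECT Count(*) AS N FROM tblOrders WHERE CustomerID = 42", _
    dbOpenSnapshot)
n2 = rs!N
rs.Close
```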

Do you use drive mapping or UNC paths when connecting to the BE file? If your domain permissions are set up correctly, UNC is (slightly) faster.

I would also check the server hosting your BE. I would look at whether the BE's folder is nested many layers deep (a bad thing) and whether the system tuning parameters are set too low for heavy file traffic. You'll need to talk to that system's admin to ensure the settings have been balanced for shared file-system access as opposed to simple "general" server access. (There is a small difference.)
 
A problem on the network will slow things to a crawl. It does here, and we are on SQL Server.
 
Thanks for your help, everyone. IT said they have tried something on the server and are going to reboot it tonight to hopefully help performance. I guess they changed something. I have also gotten a few suggestions from other boards on things to try on the front end. I will test again Friday and keep you posted.

P.S. Doc Man ... I use UNC paths directly for the table links, and they aren't far down the tree ...
\\ServerName\ShareName\Folder
Thanks for your other suggestions as well.
 
Update on this. The server changes IT made did not work. We did a lot of testing Friday and today with the database, trying to pinpoint the problem. The slowness is especially noticeable when moving from record to record, so we decided to do some speed testing based on that. Here is the testing we have done, with some interesting results:

Test 1
1) User 1 opens database and opens record on main form. User 1 navigates to another record. Time to navigate between records: 1-2 seconds.
2) While User 1 is still in database, User 2 opens database and opens record on main form. User 2 navigates to another record. Time to navigate between records: 1-2 seconds.
3) While both users are in database, User 1 then navigates to a different record. Time to navigate between records: 6-8 seconds.
4) User 2 continues to have 1-2 second navigation time, while User 1 is slow.
5) User 2 then exits the database. User 1 navigation time is still 6-8 seconds. User 1 exits the system and gets back in. Navigation speed for User 1 improves to 1-2 seconds.

Test 2
Repeat steps 1-4 above, but step 5 is ... User 1 exits the system and immediately gets back in. Navigation speed for User 1 improves to 1-2 seconds. Navigation speed for User 2 immediately slows to 6-8 seconds.

Test 3
We repeated Test 1 above, except in step 2 … User 2 opens database, but stays on main menu (which is based on a small table) and does not open any forms or reports. Same exact results.

Test 4
We repeated Test 1 one more time, except in step 2 … User 2 opens the backend file manually on the server, but does not open any tables. Same exact results.

So ... the users don’t even have to be on the same form or even asking for data, as Tests 3 and 4 show. Just opening the backend file will cause the first user to slow immediately.

We also ran these tests with three users. Whoever opens the database last is fast; the first two users immediately slow down. Any user who closes and reopens (thereby becoming the last user in) will be fast and will slow everyone else down. And once the very first user gets slowed down, it stays slow for that user even if all the other users exit the system. That implies something is still being held or locked, even though those users have exited. Only after that first user closes and reopens does it become fast again.

I also tried doing these tests logged on as me, but in two different locations (my desk and a conference room), just to see if the Windows login had anything to do with it. The results were the same. The machine that opened the database first immediately slowed down as soon as the second machine opened the database.

I also did a multi-instance test on my machine, where I opened two separate frontend files on my machine, connecting to the same backend. Interestingly, both instances are very fast no matter what I do. I confirmed it was running two separate instances of Access thru the task manager. So, it seems it’s slower with different machine locations. The login makes no difference.

I have never seen behavior like this. Any ideas as to what is causing this?? Is this a server or network setting? Someone here mentioned it may be a 32-bit / 64-bit issue (Office is 32-bit but Server is 64-bit). Any thoughts on that?
 
This isn't immediately obvious as a 32/64 issue. I have 64-bit Windows at home running 32-bit Office, and there is no problem with it. All that happens is that 64-bit Windows (any flavor) runs the 32-bit code under its compatibility layer.

I looked back at this thread and realized we don't know certain things that are relevant. Can you please supply us with at least some of this information?

What version of Access are you using for Front-End and in what version was the Back-End created?

What version of Windows is each workstation using, and are they uniform? Is each user running the same version of Office?

What kind of domain authentication are you using? Kerberos? PKI? Username & password? Workgroup security? Something else?

When you deployed the FE, if it was at least Ac2007, did you attempt to customize the "Current Database" tab for the database before finalizing the "public" copy of the FE? If so, in what way?

I see a significant issue in that you have no performance slowdown for two or even three instances under the same user on the same machine, but have slowdowns for one user from each of several different machines. Further, users joining and leaving the database experience variability of performance when OTHER users join/leave.

You might have to get your system administrator involved here, because this is a very narrow question. What I smell here is a lock management issue in that, for some reason, your setup is very slow to arbitrate conflicting file system locks. You say that you maintain a "permanent" connection to the BE (your 2nd bullet in the original post), so the only thing that should be that slow would be data buffer movement. File Open locks should all be resolved once that "permanent" connection is up, but new users joining the mix/melee would have to compete for locks.

By any chance does your database include some relatively large MEMO fields, OLE-embedded objects, or BLOBS (binary large objects)? If so, check using the task manager to see if any of the instances that slow down also suddenly start growing in terms of memory used when you start to have this conflict and slowdown. This would be an indicator of a design issue and a busy workstation.

Another question that comes to mind: Are you doing intense VBA-based auditing for form changes behind your offending forms? Like, building a reconstructive audit history of all changes? That would cause buffer churning, which would lead to slowdowns as the memory manager starts to heat up a bit.

It is always tempting to compare Windows to other operating systems, and in the case of Dave Cutler (who designed and wrote the bulk of Windows NT), there is the odd chance (bolstered by other articles I have read) that he designed WinNT to be like OpenVMS in many ways. Mostly because it made for a good security template, but also it had some other features going for it. If so, your symptoms suggest to me that there is a lock conversion problem having to do with the lock owner not immediately releasing the lock, thus requiring increased lock negotiation. This would be accompanied by increased network traffic when the second and subsequent users join. The increase would be caused by having to negotiate with a lock owner when in fact the item should not be owned any longer than minimally necessary.

If it really IS a problem in lock arbitration, then the practical solution is to see if you have pessimistic or explicit locking in places where optimistic or NO locking would be more appropriate. Fix the excessive data and file locking problem and your responsiveness will improve.
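In DAO terms, that means checking the LockEdits behavior when recordsets are opened. A sketch with an illustrative table name (dbPessimistic and dbOptimistic are the real DAO constants):

```vba
Dim rs As DAO.Recordset

' Pessimistic: the record/page locks as soon as .Edit is called, and
' other users block until .Update releases it (tblOrders is illustrative)
Set rs = CurrentDb.OpenRecordset("tblOrders", dbOpenDynaset, 0, dbPessimistic)

' Optimistic: the lock is only taken for the instant of .Update itself --
' usually the better default in a shared FE/BE setup
Set rs = CurrentDb.OpenRecordset("tblOrders", dbOpenDynaset, 0, dbOptimistic)
```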
 
I would double check the persistent connection

The following is the original XP slow-performance thread, which may still have some useful ideas. The sharing-violation delay described there behaved similarly to the persistent-connection issue. Not sure if that has relevance in later versions of Windows.

https://support.microsoft.com/en-us/kb/889588
 
Gemma ... I did confirm the persistent connection. I monitor the folder where the backend is. As soon as someone opens the database, the lock file appears and does not disappear until they close the database. It does not disappear and reappear like it did before I created the persistent connection.

Doc Man ... good questions. Frontend and backend are both Access 2010 / created in 2010. All workstations have Windows 7, 64-bit. All workstations have either Office 2010 or Office 2013, 32-bit. All have the same updates, patches, etc., as those are pushed down automatically to our machines by the IT dept.

Domain Authentication is Username and Password, but it's a dynamic password (i.e. PIN + Token Code).

Access Options changed or filled in on Current Database Tab ...
Application Title filled in
Display Form filled in
Overlapping Windows
"Use Access Special Keys" thru "Enable design changes for tables ..." all unchecked
Display Navigation Pane ... unchecked
Ribbon Name filled in
"Allow Default Shortcut Menus" unchecked
Name AutoCorrect Options ... all unchecked
Don't display lists when more than this number of records read: changed to 10,000

No property changes on "Client Settings". I use the default of Shared mode, No Locks, Open databases by using record-level locking.

Users only experience changes in performance when other users JOIN after them. In a nutshell, the last user in is always fast. The user who joined just before them immediately slows down. Anyone before that is already slow.

I have five memo fields total in the database (no OLE objects or BLOBs). If I add up all the records from the 4 tables that have these fields ... it's a total of 6,051 records with memo fields filled in. Some are very short, others a couple of paragraphs. The longest memo field is 13,417 characters, the next longest is 5,598, and they go down from there. The vast majority are less than 1,000 characters.
However, I do have all five memo fields formatted as rich text. That's what the users have in their old system and they would like to keep that feature. Is that an issue?

As far as auditing ... only two fields are audited when changed, and yes, they have an audit-trail function behind them, but it's very small. Nothing else is tracked. In any case, when we run the tests described above, no one is changing any data!! They are simply moving from record to record (requerying the form to pull a different record). And I have very little code behind the Form_Current event. If that were the issue, I assume even one person would be slow.

When you talk about file locking issues ... are you talking about the Access ldb file, or are you talking about it from a server or network perspective and that's what the IT group needs to fix?
 
Finally found the problem!! I have four unbound boxes on that tab whose control sources are all set to complex DLookup functions. I forgot I had these as DLookups, which from what I read can really slow the system down! I had changed them everywhere but here. If I set all four to Null, it is lightning fast with two users. Can't believe it. They refresh each time the user navigates to a different record, since the data in those boxes changes, so that's obviously what was slowing it down. I guess it had a harder time pulling this data when someone else had the table open??
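For context, each offending box was an unbound textbox whose Control Source was an expression along these lines (the query and field names here are invented, not my actual ones):

```vba
' Control Source expression of one unbound textbox. Access re-evaluates
' this on every record navigation, firing a fresh DLookup against the
' backend each time (names are illustrative):

=DLookUp("[TotalAmount]", "qryComplexStats", "[MainID] = " & [ID])
```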

So ... what I ended up doing is moving those four boxes into a subform, putting that subform on the main form, and linking it by the main ID. I originally just based that subform on a query (which was based on another query), but that still seemed somewhat slow. So now I have it run a make-table query (built from those queries) that creates a temporary / local table. It only runs the make-table when the user clicks on the tab the subform is on (or when they go to another main record while that tab is still active). You would think that would be slow, having to run the make-table all the time, but it is lightning fast! So for now, that's what we're going with. I did multi-user testing yesterday with four users and everyone was fast! I was very happy with the performance ... finally!

I am thinking the reason the queries are slow directly against the table is that the table is somewhat large ... over 80k records, and I have some expressions in the queries. It only needs to analyze about 100 records (those related to the main record) to get what it needs for the four boxes. So when I do the make-table, it only pulls those 100 records down and then applies the expressions / functions to that small local table.
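In case it helps anyone, the make-table step reduces to something like this (the object names are placeholders for my actual ones):

```vba
Private Sub RefreshLocalStats()
    ' All object names here are placeholders, not the real ones
    Dim db As DAO.Database
    Set db = CurrentDb

    ' Drop the previous temp table if it exists (error ignored if not)
    On Error Resume Next
    db.Execute "DROP TABLE tmpLocalStats"
    On Error GoTo 0

    ' Pull only the ~100 related rows into a local table; the
    ' expression-heavy queries then run against this small table
    ' instead of the 80k-row backend table
    db.Execute "SELECT * INTO tmpLocalStats " & _
               "FROM tblBigTable WHERE MainID = " & Me!MainID, dbFailOnError

    Me.subStats.Requery
End Sub
```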

But I'm open to suggestions on doing it cleaner / better.

Thanks to everyone for their help!
 
Hi Traci,

The trick is to filter before the form loads. I have a file with over 350,000 records; the maximum number of records per LOT is about 200. By filtering on SLAP, LOT and Date there are no major performance issues.

Simon
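In code terms, Simon's filter-before-load approach looks something like this (SLAP and LOT are from his example; the form and control names are invented):

```vba
' Open the form already filtered to one LOT, so only ~200 of the
' 350,000 records ever cross the network (names are illustrative)
DoCmd.OpenForm "frmLots", acNormal, , _
    "SLAP = '" & Me!cboSlap & "' AND LOT = '" & Me!cboLot & "'"
```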
 
