MS Access vs other technologies

CJ_London

Super Moderator
Staff member
Local time
Today, 02:28
Joined
Feb 19, 2013
Messages
16,610
that works!

If a user enters a *, then they will see all the records. But a) the form load will not be slowed by needing to populate the combo with thousands of records, and b) it is the user's choice to enter a *.

It can also work on a data security front - if you disable the use of the *, users will never be able to generate a full list of customers (at least from this combo) to take to a competitor, say. OK, they can repeat the exercise 26 times, but that is more effort.

You do get a missing operator syntax error if you click on the dropdown and select 'Type to search...', but that should be easy coding to resolve.
 

shadow9449

Registered User.
Local time
Yesterday, 21:28
Joined
Mar 5, 2004
Messages
1,037
that works!

If a user enters a *, then they will see all the records. But a) the form load will not be slowed by needing to populate the combo with thousands of records, and b) it is the user's choice to enter a *.

It can also work on a data security front - if you disable the use of the *, users will never be able to generate a full list of customers (at least from this combo) to take to a competitor, say. OK, they can repeat the exercise 26 times, but that is more effort.

You do get a missing operator syntax error if you click on the dropdown and select 'Type to search...', but that should be easy coding to resolve.

Interesting observation about the *, but I doubt that anyone really needs a complete list, especially if there are tens of thousands of rows.

I corrected the error you found by changing the After Update code to:

Code:
If Not IsNull(Me.cboSearchCombo) And Me.cboSearchCombo > 0 Then
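
In context, the full handler might look something like this - a minimal sketch only, and the RecordsetClone lookup and the ClientID field name are assumptions rather than taken from the demo:

Code:
Private Sub cboSearchCombo_AfterUpdate()
    ' Skip the 'Type to search...' prompt row and empty selections
    If Not IsNull(Me.cboSearchCombo) And Me.cboSearchCombo > 0 Then
        ' Move the form to the selected client (ClientID is an assumed PK name)
        With Me.RecordsetClone
            .FindFirst "ClientID=" & Me.cboSearchCombo
            If Not .NoMatch Then Me.Bookmark = .Bookmark
        End With
    End If
End Sub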

Thanks!
 

shadow9449

Registered User.
Local time
Yesterday, 21:28
Joined
Mar 5, 2004
Messages
1,037
Question to anyone reading this thread:

In the demo I uploaded, I set the recordsource for the on-form subform (as opposed to the one on the tab that loads when the tab is clicked) to null, and then loaded the data after the client's record was accessed.

Is this a better approach rather than linking the master and child fields?

Another way to ask the same question: If I link the master and child fields, setting the recordsource of the subform to be bound to the sales table, will Access attempt to load or access the Sales table before the client's record is loaded?

Thank you
 

CJ_London

Super Moderator
Staff member
Local time
Today, 02:28
Joined
Feb 19, 2013
Messages
16,610
will Access attempt to load or access the Sales table before the client's record is loaded?
It will attempt to load whatever the recordsource specifies.

then loaded the data after the client's record was accessed.

Is this a better approach rather than linking the master and child fields?
Assuming the subform allows additions, I would do both - in the main form Current event, load the data in the subform with criteria based on the main form PK, and ensure the LinkChild/LinkMaster fields are populated. One of the things these do is auto-populate the FK in the subform table.

The alternative would be to set the default value for the subform FK based on the parent form PK (which you can only do if you have a control for the subform FK, hidden or not). Otherwise you would have to populate the FK field in the subform Current event or BeforeUpdate event, which means needing code in a form that perhaps otherwise doesn't need any.
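
A rough sketch of that combination - the subform control name (sfrmSales), table names and ClientID are all placeholders, not from your demo:

Code:
' Main form Current event - load only the current client's sales.
' The subform's saved recordsource can be left returning nothing
' (e.g. SELECT * FROM tblSales WHERE False) so no rows load at startup.
Private Sub Form_Current()
    If Not Me.NewRecord Then
        Me.sfrmSales.Form.RecordSource = _
            "SELECT * FROM tblSales WHERE ClientID=" & Me.ClientID
    End If
End Sub

With the subform control's Link Master/Child Fields both set to ClientID, any rows added in the subform then get their FK filled in automatically.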

Another approach you can use is to have a global DB object which is assigned CurrentDb when the app is first opened.

Then instead of setting a form recordsource you would assign a recordset.

I'm not sure which is quicker, but it does mean you maintain an open connection.

When the app opens:

Code:
Set DB = CurrentDb

In a form's Open event:

Code:
Set Me.Recordset = DB.OpenRecordset("SELECT * FROM myTable WHERE PK=" & Me.OpenArgs, dbFailOnError)

Or in the main form's Current event, to populate a subform:

Code:
Set Me.subformname.Form.Recordset = DB.OpenRecordset("SELECT * FROM myTable WHERE FK=" & Me.PK, dbFailOnError)
 

shadow9449

Registered User.
Local time
Yesterday, 21:28
Joined
Mar 5, 2004
Messages
1,037
I'm not clear on the answer. It's possible I didn't ask the question clearly.

I set the master/child link based on the client ID. When the client's record loads, the client-specific subform data loads automatically. This means that I don't need any extra coding, and I don't need to set the subform's recordsource to null, or set it to only specify the loaded client's records. It seems that the form is handling all that.

My question is if there is a speed advantage to setting the recordset to null and then after the client's data is loaded on the main form, THEN set the recordset to the client's specific data.

Thank you
 

CJ_London

Super Moderator
Staff member
Local time
Today, 02:28
Joined
Feb 19, 2013
Messages
16,610
My question is if there is a speed advantage to setting the recordset to null and then after the client's data is loaded on the main form, THEN set the recordset to the client's specific data.
Assuming the subform data is a large table, the primary speed benefit comes with loading the main form because that is when the subform data will be loaded.

The benefit for the subform on change of record in the main form will be much smaller but should still be there. As with filtering any continuous form, relative benefits will come down to a good db design and proper use of indexing.
 

shadow9449

Registered User.
Local time
Yesterday, 21:28
Joined
Mar 5, 2004
Messages
1,037
I'm sorry, I don't understand which approach you are recommending.

Assuming the subform data is a large table,

Yes, it is. Probably ten times as big as the client table.

the primary speed benefit comes with loading the main form because that is when the subform data will be loaded.

I don't understand what you are saying here. Benefit of simply linking or benefit of setting to null and then setting the recordsource after the client's record is loaded?

The benefit for the subform on change of record in the main form will be much smaller but should still be there.

When you say "change of record", that means setting the recordset using code rather than letting Access handle it by linking?

Thank you for clarifying.
 

CJ_London

Super Moderator
Staff member
Local time
Today, 02:28
Joined
Feb 19, 2013
Messages
16,610
Benefit of simply linking or benefit of setting to null and then setting the recordsource after the client's record is loaded?
Subforms load before the main form. So if you load a large recordset into a subform for subsequent filtering, it will take longer than loading a small recordset. You are also transiting much more data across the network.

When you say "change of record", that means setting the recordset using code rather than letting Access handle it by linking?
Whichever - when you change the record in the main form.

Shadow, these are not fixed rules. You design and tune the app on the basis of what it is going to do. The basic principle is that a recordset (be it one created in VBA, or a form, report, combo or listbox recordset) accessing data from anywhere will load quicker with a small amount of data, or zero data, compared with a large amount of data.

An app where the user opens a single form and spends the entire day on that form browsing through the records may not benefit from opening it with a zero-length recordset - just take the 'initial hit' and it's done. But the same form used by users who frequently open it at a specific record selected from a navigation or other form, do what they need to do and then close it, will benefit in the initial form opening. How much of a benefit depends on the factors below.

How long a recordset takes to populate is also dependent on how well the underlying tables and relationships for that recordset are designed - this includes consideration of normalisation and indexing, not using domain functions in your queries, and avoiding incorrect use or design of outer joins, union queries and GROUP BY queries, among other factors.

But usually the biggest factor is the speed of the connection between the front end and the backend. A local backend is clearly going to be the fastest; one on the network will be slower because it is dependent on the speed of the network and the standard of your connection points. Connecting via VPN will be slower still because you have the performance of that connection as well.

So anything you can do to reduce the amount of time spent transiting data across the network will show a benefit (and also free up the network for other users). How much of a benefit will depend on the above factors, with the biggest benefits being seen on large datasets over a slow network.

Many newbies are not aware of these factors. They start developing in the fastest environment (backend on their machine) and it is only when they go live and put the backend on the network that these performance issues become evident. A not uncommon reaction is to blame Access and decide they need to move the backend to SQL Server. But if it is a straight transfer, little benefit will be seen because the real reasons for slow performance have not been addressed.
 

shadow9449

Registered User.
Local time
Yesterday, 21:28
Joined
Mar 5, 2004
Messages
1,037
Subforms load before the main form. So if you load a large recordset into a subform for subsequent filtering, it will take longer than loading a small recordset. You are also transiting much more data across the network.

So, the way I'm reading you, the only thing NOT to do would be to leave the subform unqualified - i.e. to base it on the complete table - and then filter it as the user navigates the main form.

So:

- Linking based on Client ID
- Setting the recordset of the subform as being filtered to only represent records of the client on the main form
- Setting the subform to initially null and then programmatically setting the recordsource as the user navigates

...would all be roughly equivalent in performance.


Shadow, these are not fixed rules. You design and tune the app on the basis of what it is going to do. The basic principle is that a recordset (be it one created in VBA, or a form, report, combo or listbox recordset) accessing data from anywhere will load quicker with a small amount of data, or zero data, compared with a large amount of data.

Absolutely! Very important summary of what we've been discussing.

Many newbies are not aware of these factors. They start developing in the fastest environment (backend on their machine) and it is only when they go live and put the backend on the network that these performance issues become evident. A not uncommon reaction is to blame Access and decide they need to move the backend to SQL Server. But if it is a straight transfer, little benefit will be seen because the real reasons for slow performance have not been addressed.

There is definitely a real learning curve to Access and many new users (and long-term users) don't understand the concepts and then blame Access. I am thankful that I have a university degree in Computer Science, so I studied indexing and normalization, and at the very least I made sure to use these principles from the get-go. But many people just see Access as a glorified Excel spreadsheet with fancy forms and learn the hard way, if ever.

A point you raised over the weekend: you suggested setting DB = CurrentDb when the app opens (I assume in a global variable) and then utilizing that every time you access a recordset. I confess that what I do is declare and set db as CurrentDb in every module. While I realize that there is way more programming the way I do it, is there a speed or performance advantage to doing it once and leaving it? Is there much overhead every time the declaration is made?

SHADOW
 

CJ_London

Super Moderator
Staff member
Local time
Today, 02:28
Joined
Feb 19, 2013
Messages
16,610
So:

- Linking based on Client ID
- Setting the recordset of the subform as being filtered to only represent records of the client on the main form
- Setting the subform to initially null and then programmatically setting the recordsource as the user navigates

...would all be roughly equivalent in performance.
I would expect the first two to be the same (linking is effectively the same as filtering in this scenario). The last depends on other factors, but I would expect it to be faster for large datasets and about the same for smaller ones - subject to the factors already discussed.

A point you raised over the weekend: you suggested setting DB = CurrentDb when the app opens (I assume in a global variable) and then utilizing that every time you access a recordset. I confess that what I do is declare and set db as CurrentDb in every module. While I realize that there is way more programming the way I do it, is there a speed or performance advantage to doing it once and leaving it? Is there much overhead every time the declaration is made?
It's more to do with memory management: your way, you are creating pointers all over the place - and then (hopefully) deleting them. There is debate about what happens when a sub finishes - are all the local variables removed from memory? Not a debate I wish to get involved with, but you can google for more thoughts on this; a term to use is 'memory leakage'. Technically there is a performance benefit because you are keeping a connection open, so not incurring the overhead of making it again, but whether it is really noticeable I wouldn't like to say.
 

The_Doc_Man

Immoderate Moderator
Staff member
Local time
Yesterday, 20:28
Joined
Feb 28, 2001
Messages
27,179
I'll step in with a simple compromise for the question about

Code:
Set DB = CurrentDb

The way I did this was to have a general module where I kept some common utility routines and variables that were used by all of my forms. Then I put the DB variable, which I had declared as a DAO.Database just to avoid any confusion, in the general module as a public variable.

Then my opening form's Form_Load routine initialized the DB variable once. After that, all routines used it. And the debate becomes moot when your app quits (Application.Quit) because that closes all pending connections, which are recorded in one of the hidden MSys tables. Then, when the Windows image exit routine executes, it reclaims all private memory that the app had previously claimed or added. In the process, the "Object Re-Use" manager assures that the reclaimed memory will be wiped before any further use. So the app exit takes care of that cleanup step. And putting the DB variable as a public item in a general module avoids the debate about what happens with a locally declared copy of CurrentDb.

While I firmly believe in closing things and erasing memory structures, I also know that there are ways to avoid the problem in the first place. And I fully acknowledge that sometimes you aren't sure whether you need to have a public variable such as I just described. Usually what happens is that you discover that you use the item often enough to finally MOVE it to the general module at your next available update.
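
A minimal sketch of that arrangement, with assumed module and form names:

Code:
' In a general (standard) module, e.g. modGlobals
Public DB As DAO.Database

' In the opening form's module
Private Sub Form_Load()
    Set DB = CurrentDb    ' initialise once; all other routines then use DB
End Sub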
 

shadow9449

Registered User.
Local time
Yesterday, 21:28
Joined
Mar 5, 2004
Messages
1,037
I guess that the debate that people refer to is whether closing the CurrentDB actually does anything.

I am meticulous about closing any recordsets or other objects I open and then setting them to Nothing, as per what I've read. However, from what I've read, closing a db variable set from CurrentDb doesn't actually close the underlying object, so there's no reason to do so. I guess this is a matter of opinion and no one knows for sure. Your approach of a global setting cuts down on needing to do this all the time AND bypasses the debate.
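
For what it's worth, the pattern being described looks something like this (tblClients is just a placeholder):

Code:
Dim db As DAO.Recordset
Dim rs As DAO.Recordset

Set db = CurrentDb
Set rs = db.OpenRecordset("SELECT * FROM tblClients", dbOpenSnapshot)
' ... work with rs ...
rs.Close            ' close the recordset that was opened
Set rs = Nothing    ' release the reference
Set db = Nothing    ' clearing db is the part whose value is debated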

Thanks
 

static

Registered User.
Local time
Today, 02:28
Joined
Nov 2, 2015
Messages
823
Unless you've created a new workspace/database, CurrentDb will be a reference to DBEngine.Workspaces(0).Databases(0).
To set something to Nothing, all references need to be closed, but Access still has the object open and the dispose request silently fails.
 

The_Doc_Man

Immoderate Moderator
Staff member
Local time
Yesterday, 20:28
Joined
Feb 28, 2001
Messages
27,179
Ah, but static - the question isn't about disposing of CurrentDB; it is about disposing of the COPY of CurrentDB. The request to close THAT connection (as a second connection) certainly COULD succeed. You are right, however, that the DBEngine.Workspaces(0).Databases(0) that is the "freebie" when you open a database will still be there until the Application.Quit works its magic.
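
A quick way to see that copy-versus-original distinction in the Immediate window (assuming the default workspace):

Code:
' CurrentDb hands back a new Database object on every call,
' whereas DBEngine(0)(0) keeps returning the same open instance.
Debug.Print CurrentDb Is CurrentDb              ' False
Debug.Print DBEngine(0)(0) Is DBEngine(0)(0)    ' True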

Still, in a sense I agree that if you use Application.Quit to force closure, which leads to a Windows process termination sequence, all internal connections and channels and sockets will be closed. When memory is then released back to the free memory pool, nothing should remain regardless of HOW it came into being - with the single exception of having activated an Application object that has a life of its own, and therefore needs to be explicitly told to bug off.
 

static

Registered User.
Local time
Today, 02:28
Joined
Nov 2, 2015
Messages
823
You're probably right. I never use global or module level objects so it's not something I worry about. They get killed off as they go out of scope.
 

shadow9449

Registered User.
Local time
Yesterday, 21:28
Joined
Mar 5, 2004
Messages
1,037
I have a question about the method that people use (as in my example, or Allen Browne's example) to load the combo box only after a few characters are entered, thus limiting the number of rows rather than loading 20 or 30 thousand rows.

Again, my question is based on not understanding how Access loads data. If you use the Like * operator, does the application not need to pull the entire table across the network to evaluate the user's input against all last names in the client table? Unlike the method of only loading one client record into the form's recordset, which is very fast because of indexing, I am assuming that in the case of loading the rows into the combo box we don't have this optimization as there is no index on the last name.

So just to summarize the question: when you use Like * as a WHERE condition in a query, does Access need to load the entire table across the network in order to load the combo box rows, which would give us little advantage over just loading the entire client table?

Thank you
 

Minty

AWF VIP
Local time
Today, 02:28
Joined
Jul 26, 2013
Messages
10,371
If you use Like "*..." (with the wildcard at the start of the expression), Access cannot use the index.
However, Like "ABC*" would use the index, so restricting the search until a number of letters have been entered, followed by the wildcard, will (should) improve responsiveness.
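
A small illustration of the two patterns, with assumed table and field names ("AB" stands in for whatever the user has typed):

Code:
Dim strRowSource As String

' Trailing wildcard only - the engine can use an index on LastName
strRowSource = "SELECT ClientID, LastName FROM tblClients " & _
               "WHERE LastName LIKE 'AB*'"

' Leading wildcard - the index cannot be used, so every row has to be examined
strRowSource = "SELECT ClientID, LastName FROM tblClients " & _
               "WHERE LastName LIKE '*AB*'"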
 

jdraw

Super Moderator
Staff member
Local time
Yesterday, 21:28
Joined
Jan 23, 2006
Messages
15,379
As Minty said, the leading * (wildcard in the leftmost position) would not use an index and would force a physical read (slow - to be avoided where possible).
If you use the typical find-as-you-type (On Change, one character at a time) you would be doing a search each time for that string and returning qualifying records to a listbox or wherever you show the result.
If you don't do the search until, say, 4 characters have been entered, then you would eliminate 3 searches with their returns and displays and would still be using an index. This would be a relatively fast approach - see the sketch below.
But if you know the name or a unique identifier, then that would be fastest, of course.
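
As a sketch of that idea (control, table and field names are placeholders; a production version would also escape any apostrophes in the typed text):

Code:
Private Sub cboSearchCombo_Change()
    ' Only query the back end once at least 4 characters have been entered
    If Len(Me.cboSearchCombo.Text) >= 4 Then
        Me.cboSearchCombo.RowSource = _
            "SELECT ClientID, LastName FROM tblClients " & _
            "WHERE LastName LIKE '" & Me.cboSearchCombo.Text & "*' ORDER BY LastName"
        Me.cboSearchCombo.Dropdown
    End If
End Sub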
 

shadow9449

Registered User.
Local time
Yesterday, 21:28
Joined
Mar 5, 2004
Messages
1,037
Sorry, when I said "Like *" I actually meant "Like AB*".

Can someone please elaborate on how the index is being used?

Is it just less work for the application to parse or is the amount of data crossing the network being limited?

Does the entire table cross the network or not? I'm still confused :)
 

CJ_London

Super Moderator
Staff member
Local time
Today, 02:28
Joined
Feb 19, 2013
Messages
16,610
Access does not pull all the data across the network and then apply the criteria - the ACE/JET (or SQL Server) engine does that work before returning the reduced results.

However, filters are another matter - they work locally on the dataset that has already been pulled across the network.
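
To illustrate the difference (names are placeholders):

Code:
' Criteria in the recordsource: the engine restricts the rows
' before they come back to the front end
Me.RecordSource = "SELECT * FROM tblClients WHERE LastName LIKE 'AB*'"

' Filter on an already-loaded form: applied locally to data
' that has already been pulled across the network
Me.Filter = "LastName LIKE 'AB*'"
Me.FilterOn = True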
 
