What prevents Access from using the full power of the PC?

amorosik
Member · Joined Apr 18, 2020 · 514 messages
Sometimes an Access procedure stays busy for a long time.
For example, when I export a project of hundreds of forms and reports using the VCS procedures, the sequence of operations takes over ten minutes.
I'm talking about code and databases that are entirely local, so everything lives on the PC I'm working on and no LAN communication is involved.
Opening the Windows Task Manager, you can see that the load on the various CPU cores, the disk and the network all stays very low: in my case the CPU is at 10%, the disk at 1-2%, and none of the individual cores is at 100%, although several of them show some activity.
In my case the CPU is an AMD Ryzen and the disk a Samsung SSD.
The question is: why doesn't the CPU, at least on a single core, go to 100%?

[Attached screenshot: Windows Task Manager performance graphs]
 
The main reason for this is probably that VBA code executes single-threaded, i.e. always on only one core, regardless of how many cores the processor has.
The single-threaded performance of a CPU is therefore more important for Microsoft Access than its multi-threaded performance.

There may be other factors, e.g. when executing queries. Several threads may be used there, but I am unsure about this.

Your CPU has 8 cores, if I see that correctly. 100% / 8 = 12.5%. That is about right if one core of the processor is fully utilized.

Edit: It looks like the CPU supports Hyper-Threading and therefore has 16 logical (virtual) cores, in which case a single fully loaded thread would show up as roughly 100% / 16 = 6.25% of total CPU.

It is of course possible that the CPU distributes the one thread across several cores (alternately). Possibly for thermal reasons?
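As a rough illustration (a hypothetical test loop, not anything from the OP's project), a purely CPU-bound VBA procedure like the one below should drive exactly one logical processor to full load while Task Manager's overall CPU figure stays near 100 / 16 ≈ 6% on a 16-thread CPU:

Code:
' Hypothetical busy-loop: watch the per-core graphs in Task Manager while it runs.
' Pure CPU work, no I/O and no DoEvents, so the single VBA thread never yields.
Public Sub BusyOneThread()
    Dim stopAt As Single
    Dim x As Double
    stopAt = Timer + 15          ' run for roughly 15 seconds
    Do While Timer < stopAt
        x = x + Sqr(x + 1)
    Loop
End Sub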
 
Your hypothesis seems incorrect to me.
If the Access code executes on only one core, why do we see several cores active and working?
What we see in the graph is only Microsoft Access activity, because before starting I checked that everything was at or near zero, with the CPU load within 2-3%.
Also, why doesn't at least one of the visible cores stay pinned at 100%?
Finally, supposing the operating system or the CPU can distribute a single thread across several cores, why don't we see all the cores going at 100%?
 
It is certainly up to the operating system (perhaps even the CPU) to decide which instructions of the (single) VBA thread are handed to which core, and when.
And since only one core at a time is executing a (VBA) instruction, the others sit idle in the meantime.
 
I'm going to answer your question, but it is not going to be a short answer.

You showed Windows Task Manager images from the "Performance" tab - but that is only a part of the story. Switch to "Processes" to see things that, at least in potential, COULD be running. A "process" is the name applied to the logical construct used by Windows internally to manage your activities. A process owns resources such as allocated memory, file handles, ... and sometimes a CPU resource.

The Windows Process manager has a list of things to do (see Processes list). Every so often, Process Manager decides it is time to reschedule based on a fairness counter that is essentially a timer. Each process gets a maximum of X time units in a continuous run-burst. This time burst is a tunable number (usually of clock ticks) that reflects your system's needs. This internal system timer, when it counts down to 0, triggers a "process reschedule" event (not visible to Access... wrong class of event.) This is a hold-over from one-lung computer days - one CPU, one thread, everything runs via time-slicing. If you didn't time-slice on the really old computers, they would never be able to manage multi-tasking.

The process list structure includes a queue of every process that is in a "Needs CPU" state. MOST of those processes in the process list are in a voluntary wait state of one kind or another. For instance, waiting for I/O completion, waiting for a service request to come in, waiting to be swapped into physical memory (if you have small RAM and a big "virtual memory" file). The "waiting for CPU" state says you have everything you need except the CPU resource.

When the "time to reschedule" occurs, the process currently holding a CPU resource loses it and that process gets moved to the end of the "Needs CPU" queue. The next "needs CPU" process (that was waiting at the head of the queue) now gets the CPU and has ITS shot at doing something. This is called "round-robin" scheduling. The queue also has priorities, which can be assigned so that high-priority processes get the CPU more often than low-priority processes.

In each case when you have more than one CPU, the scheduler assigns CPUs to each "need CPU" process until there are no more CPUs to assign. Therefore, the answer to "why you see multiple active cores but no saturated cores" is that the order of a process in the "need CPU" queue determines the most likely CPU it will get, and the scheduler actually tries to distribute processes evenly (as a heat load issue). Even in the hypothetical "single process" case, you would bounce from one CPU to the next for as many CPUs as you have. That is why you see peaks and valleys in the CPU saturation graphs. The width of those peaks is proportional to the length of that "process reschedule" timer.
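If you want to watch that bouncing from inside VBA, one quick (purely illustrative) probe is to sample the Windows API GetCurrentProcessorNumber between bursts of busy-work; on most machines the reported logical processor changes from sample to sample even though only one thread is running:

Code:
' Standard module. GetCurrentProcessorNumber is a kernel32 call (Vista and later);
' the sampling loop around it is just a hypothetical sketch.
#If VBA7 Then
    Private Declare PtrSafe Function GetCurrentProcessorNumber Lib "kernel32" () As Long
#Else
    Private Declare Function GetCurrentProcessorNumber Lib "kernel32" () As Long
#End If

Public Sub ShowCoreHops()
    Dim i As Long, j As Long
    Dim x As Double
    Dim cores As String
    For i = 1 To 10
        For j = 1 To 5000000          ' busy-work so the scheduler has a chance to move the thread
            x = x + Sqr(j)
        Next j
        cores = cores & GetCurrentProcessorNumber() & " "
    Next i
    Debug.Print "Logical processor at each sample: " & cores
End Sub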

Processes bounce among various states such as "I/O wait" or "suspended" or "Need memory" or "Need CPU" or "Executing" - but ONLY the "Executing" state can "peg" the CPU at 100%. The other states may cause the briefest of blips to do whatever is required to implement the state change. However, until processes reach the "Executing" phase, they have no sustained activity.

When you run MSACCESS.EXE, you run a process that contains the Access user interface (GUI). Each form or report that is open is a child process that can have (semi-)private resources like memory. But there is a hidden process that you rarely see that is the SQL engine. In order to have a working display that gets refreshed or repainted while a query is still running, the SQL engine has its own separate process. VBA is single-threaded but so is SQL. And in theory, if you have a form or report with a non-blank .RecordSource that points to a table or query, your GUI process sends a request to your SQL process - at which point the GUI process goes into "I/O wait" while the SQL process goes into "Executing." Then SQL returns data to the GUI, which starts execution again. Therefore, you would never see true saturation because the two processes, GUI and SQL, are tag-teaming each other.

There are moments when you might have enough going on that your number of available CPU cores is less than the number of eligible processes. When that happens, you WOULD see total performance reaching 100% - briefly. But the most common states for a process are "I/O wait" for code running in user mode, or "Suspended" for service tasks that might run in elevated priority modes. What you normally see is the load on the CPU that occurs UNTIL your process reaches an I/O wait state. But that is the most common state and the most important state for a process, since a process that does no input OR output is also pretty darned useless. That "I/O wait" state is why nothing stays at 100% for long.
 
@amorosik I don't know if this has anything to do with code being slow but have you compacted and compiled the code recently? Fragmentation may affect the process speed. I have code to export all objects to text in most of my applications. It runs when the login form closes (but only if I am the person running the app) to determine the last time a back up was made vs the date of the last changed object. Even for fairly large applications, the code doesn't take more than a minute to run. It exports all objects to text and creates a full copy also.
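For anyone curious what such a routine can look like, here is a minimal sketch of the idea (not Pat's actual code; the export folder and file-name prefixes are made up) using Application.SaveAsText:

Code:
' Minimal sketch: dump all forms, reports, modules and queries to text files.
' EXPORT_PATH is a hypothetical folder and must already exist.
Public Sub ExportAllObjectsToText()
    Const EXPORT_PATH As String = "C:\Backup\SourceExport\"
    Dim obj As AccessObject
    Dim qdf As DAO.QueryDef

    For Each obj In CurrentProject.AllForms
        Application.SaveAsText acForm, obj.Name, EXPORT_PATH & "frm_" & obj.Name & ".txt"
    Next obj
    For Each obj In CurrentProject.AllReports
        Application.SaveAsText acReport, obj.Name, EXPORT_PATH & "rpt_" & obj.Name & ".txt"
    Next obj
    For Each obj In CurrentProject.AllModules
        Application.SaveAsText acModule, obj.Name, EXPORT_PATH & "bas_" & obj.Name & ".txt"
    Next obj
    For Each qdf In CurrentDb.QueryDefs
        If Left$(qdf.Name, 4) <> "~sq_" Then        ' skip hidden system queries
            Application.SaveAsText acQuery, qdf.Name, EXPORT_PATH & "qry_" & qdf.Name & ".txt"
        End If
    Next qdf
End Sub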
 
Yes, compacted and compiled
 
It takes about a minute for a project of how many forms/reports/modules?
 
Here's the biggest one I have currently. I have larger ones but they are archived.
Code:
TypeDesc    CountOfName
Access Object - Containers    9
Access Object - Database    1
Database Document    2
Form    80
Macro    5
Module    26
Queries    445
Relationships    2
Reports    66
Table - Linked Access Tables    16
Table - Local Access Tables    66
Users    1

The table names are swapped. linked = 66, local = 16
 
When I wrote ".. over ten minutes .." I meant for a project with:
- 650 forms (the largest 1,700 MByte, another ten over 400 KByte)
- 110 modules (the largest 900 KByte, others around 400 KByte)
- 120 queries
- 300 reports
- 512 linked data tables
- 2 internal data tables

So I'd say the timings are roughly in line.
 
DoEvents is used in msaccess-vcs to display progress indicators; those calls intentionally create short interruptions in code execution.
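A minimal sketch of that pattern (not the actual msaccess-vcs code) looks like this; every DoEvents call hands the single VBA thread back to Windows so the progress display can repaint, which is exactly the kind of pause that shows up as idle CPU:

Code:
' Hypothetical export loop with a status-bar progress meter.
Public Sub ExportWithProgress(ByVal totalItems As Long)
    Dim i As Long
    SysCmd acSysCmdInitMeter, "Exporting objects...", totalItems
    For i = 1 To totalItems
        ' ... export one object here ...
        SysCmd acSysCmdUpdateMeter, i   ' advance the meter
        DoEvents                        ' yield the single VBA thread so the UI stays responsive
    Next i
    SysCmd acSysCmdRemoveMeter
End Sub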

It should also be noted that it is not only pure VBA that is running: methods of the Access objects are also used. I could well imagine that these use several threads internally. Why, for example, should instructions issued via the DBEngine run in just one thread, just because VBA itself does not offer more than one?
However, only the developers at Microsoft can probably tell us how this really works. ;)
 
To some extent you can use the SysInternals process monitoring tools to see that Access uses multiple threads to run its internal stuff, particularly DB queries.
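You can even get a rough confirmation from VBA itself via WMI; this sketch (assuming WMI is available and Access is running as MSACCESS.EXE) just reads the same ThreadCount figure that Process Explorer would show:

Code:
' Rough check: how many threads does each running MSACCESS.EXE own?
Public Sub ShowAccessThreadCount()
    Dim wmi As Object, procs As Object, p As Object
    Set wmi = GetObject("winmgmts:\\.\root\cimv2")
    Set procs = wmi.ExecQuery( _
        "SELECT ProcessId, ThreadCount FROM Win32_Process WHERE Name = 'MSACCESS.EXE'")
    For Each p In procs
        Debug.Print "PID " & p.ProcessId & " is running " & p.ThreadCount & " threads"
    Next p
End Sub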
 
However, in another thread we explored the fact that those multiple threads only run in parallel as part of a background cleanup after a given query has finished. The source of that information was a web article on multi-threading versus single-threading in a database context. For any single query, the SQL execution (JET/ACE threads only) is single-threaded. For SQL Server and similar EXTERNAL back ends, it is harder to say what happens behind the scenes.

You can search for "Access ACE engine multi-threaded" to find MANY reports that it crashes when you try to force it to multi-thread. I did that search and got a page full of references to that effect.
 
