Your hypothesis seems incorrect to me.
If Access code executes on only one core, then why do we see several cores active and doing work?
What we see in the graph is only the activity of Microsoft Access, because before starting I checked that everything was at or near zero and overall CPU usage was within 2-3%.
Also, why doesn't at least one of the visible cores stay stuck at 100%?
Finally, supposing the operating system or the CPU can distribute a single thread across several cores, why don't we see all the cores running at 100%?
I'm going to answer your question, but it is not going to be a short answer.
You showed Windows Task Manager images from the "Performance" tab - but that is only part of the story. Switch to "Processes" to see things that, at least potentially, COULD be running. A "process" is the name applied to the logical construct Windows uses internally to manage your activities. A process owns resources such as allocated memory, file handles, ... and sometimes a CPU resource.
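If you want to see some of those per-process resources from inside Access itself, here is a minimal sketch using WMI (standard Windows plumbing; nothing here is specific to your database, and the property names are the documented Win32_Process ones). Paste it into a standard module and run it from the Immediate window; it prints the same kind of numbers the "Processes" tab shows for MSACCESS.EXE.

```vba
Public Sub ShowAccessProcess()
    Dim svc As Object, proc As Object
    ' Ask WMI for the process records Task Manager is summarizing
    Set svc = GetObject("winmgmts:\\.\root\cimv2")
    For Each proc In svc.ExecQuery( _
        "SELECT Name, ProcessId, ThreadCount, WorkingSetSize, HandleCount " & _
        "FROM Win32_Process WHERE Name = 'MSACCESS.EXE'")
        Debug.Print proc.Name, _
                    "PID: " & proc.ProcessId, _
                    "Threads: " & proc.ThreadCount, _
                    "RAM (bytes): " & proc.WorkingSetSize, _
                    "Handles: " & proc.HandleCount
    Next proc
End Sub
```

You will probably find that even an "idle" MSACCESS.EXE owns far more threads than you expected - and most of them are just sitting in wait states.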
The Windows Process Manager has a list of things to do (see the Processes list). Every so often, the Process Manager decides it is time to reschedule based on a fairness counter that is essentially a timer. Each process gets a maximum of X time units in a continuous run-burst. This time burst is a tunable number (usually of clock ticks) that reflects your system's needs. When this internal system timer counts down to 0, it triggers a "process reschedule" event (not visible to Access... wrong class of event). This is a hold-over from one-lung computer days - one CPU, one thread, everything ran via time-slicing. If you didn't time-slice on the really old computers, they could never have managed multi-tasking.
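A process does not have to wait to be thrown off the CPU, by the way - it can hand back the rest of its run-burst voluntarily. Here is a hedged little illustration using the kernel32 Sleep call, whose documented behaviour for a value of 0 is "relinquish the remainder of the time slice":

```vba
#If VBA7 Then
    Private Declare PtrSafe Sub Sleep Lib "kernel32" (ByVal dwMilliseconds As Long)
#Else
    Private Declare Sub Sleep Lib "kernel32" (ByVal dwMilliseconds As Long)
#End If

Public Sub YieldEarlyDemo()
    Dim i As Long, x As Double
    For i = 1 To 1000000
        x = Sqr(i)                          ' do a little work...
        If i Mod 100000 = 0 Then Sleep 0    ' ...then volunteer to end this time slice early
    Next i
End Sub
```

DoEvents does something similar at the Access level, which is why sprinkling it into long loops keeps the Access window (and the rest of the machine) responsive.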
The process list structure includes a queue of every process that is in a "Needs CPU" state. MOST of the processes in the process list are in a voluntary wait state of one kind or another: waiting for I/O completion, waiting for a service request to come in, waiting to be swapped into physical memory (if you have small RAM and a big "virtual memory" file). The "Needs CPU" state says you have everything you need except the CPU resource.
When the "time to reschedule" occurs, the process currently holding a CPU resource loses it and that process gets moved to the end of the "Needs CPU" queue. The next "needs CPU" process (that was waiting at the head of the queue) now gets the CPU and has ITS shot at doing something. This is called "round-robin" scheduling. The queue also has priorities, which can be assigned so that high-priority processes get the CPU more often than low-priority processes.
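To picture that, here is a toy round-robin written in VBA. Each "process" is only a name in a Collection; real scheduling is far more involved (priorities, per-core queues, boosts), so treat this strictly as a cartoon of the head-of-queue / back-of-queue mechanism, not as a model of Windows.

```vba
Public Sub RoundRobinCartoon()
    Dim needsCpu As New Collection
    Dim p As Variant, quantum As Long

    ' Three imaginary processes, all in the "Needs CPU" state
    needsCpu.Add "MSACCESS.EXE"
    needsCpu.Add "EXCEL.EXE"
    needsCpu.Add "OUTLOOK.EXE"

    For quantum = 1 To 9
        p = needsCpu(1)                     ' head of the queue gets the CPU
        Debug.Print "quantum"; quantum; "->"; p
        needsCpu.Remove 1                   ' its time slice ends...
        needsCpu.Add p                      ' ...and it goes to the back of the queue
    Next quantum
End Sub
```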
In any case, when you have more than one CPU, the scheduler assigns CPUs to each "Needs CPU" process until there are no more CPUs to assign. Therefore, the answer to "why do you see multiple active cores but no saturated core" is that a process's position in the "Needs CPU" queue determines which CPU it is most likely to get, and the scheduler actually tries to distribute processes evenly across the cores (partly to spread the heat load). Even in the hypothetical "single process" case, you would bounce from one CPU to the next for as many CPUs as you have. That is why you see peaks and valleys in the CPU saturation graphs. The width of those peaks is proportional to the length of that "process reschedule" timer.
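You can actually watch that bouncing from VBA. GetCurrentProcessorNumber (kernel32, Vista and later) reports which logical core the calling thread happens to be running on at that instant. How often the number changes during a busy loop depends entirely on what the scheduler decides to do on your machine, so treat this as an experiment rather than a guarantee:

```vba
#If VBA7 Then
    Private Declare PtrSafe Function GetCurrentProcessorNumber Lib "kernel32" () As Long
#Else
    Private Declare Function GetCurrentProcessorNumber Lib "kernel32" () As Long
#End If

Public Sub WatchCoreHops()
    Dim i As Long, core As Long, lastCore As Long, hops As Long
    lastCore = GetCurrentProcessorNumber()
    For i = 1 To 1000000                    ' a single-threaded busy loop
        core = GetCurrentProcessorNumber()
        If core <> lastCore Then
            hops = hops + 1                 ' the scheduler moved us to a different core
            lastCore = core
        End If
    Next i
    Debug.Print "One thread, moved between cores"; hops; "times; finished on core"; lastCore
End Sub
```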
Processes bounce among various states such as "I/O wait" or "suspended" or "Need memory" or "Need CPU" or "Executing" - but ONLY the "Executing" state can "peg" the CPU at 100%. The other states may cause the briefest of blips to do whatever is required to implement the state change. However, until processes reach the "Executing" phase, they have no sustained activity.
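To make that concrete, the loop below never asks Windows for anything, so it never leaves the "Executing" state until it finishes. While it runs you should see roughly one core's worth of sustained CPU load, smeared across cores for the reasons above. The iteration count is arbitrary - adjust to taste, and expect Access to be unresponsive until it ends:

```vba
Public Sub PureCpuBurn()
    Dim i As Long, x As Double
    ' No I/O, no DoEvents, no waits: just arithmetic until the counter runs out
    For i = 1 To 50000000
        x = Sqr(i) * Log(i + 1)
    Next i
    Debug.Print "done:"; x
End Sub
```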
When you run MSACCESS.EXE, you run a process that contains the Access user interface (GUI). Each form or report that is open is a child of that process and can have (semi-)private resources like memory. But there is a hidden process that you rarely see: the SQL engine. In order to have a working display that gets refreshed or repainted while a query is still running, the SQL engine has its own separate process. VBA is single-threaded, and so is the SQL engine. In theory, if you have a form or report with a non-blank .RecordSource that points to a table or query, your GUI process sends a request to your SQL process - at which point the GUI process goes into "I/O wait" while the SQL process goes into "Executing." Then SQL returns data to the GUI, which starts execution again. Therefore, you would never see true saturation, because the two processes, GUI and SQL, are tag-teaming each other.
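Here is a minimal way to feel that hand-off from VBA, using DAO. "tblBig" is only a placeholder - substitute any reasonably large local table or saved query. While OpenRecordset and MoveLast run, your VBA code is not computing anything; it is simply waiting for the engine to hand the result back:

```vba
Public Sub TimeEngineWork()
    Dim db As DAO.Database, rs As DAO.Recordset
    Dim t As Single

    Set db = CurrentDb
    t = Timer
    ' VBA issues the request, then waits while the SQL engine does the work
    Set rs = db.OpenRecordset("SELECT * FROM tblBig", dbOpenSnapshot)
    rs.MoveLast                             ' force the engine to materialize the full result
    Debug.Print rs.RecordCount; "rows in"; Format(Timer - t, "0.00"); "seconds (VBA waited, the engine worked)"

    rs.Close
    Set rs = Nothing
    Set db = Nothing
End Sub
```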
There are moments when you might have enough going on that the number of available CPU cores is less than the number of eligible processes. When that happens, you WOULD see total performance reaching 100% - briefly. But the most common states for a process are "I/O wait" for code running in user mode, or "Suspended" for service tasks that might run in elevated priority modes. What you normally see is the load on the CPU that occurs UNTIL your process reaches an I/O wait state. But that is the most common state, and the most important one, since a process that does no input OR output is also pretty darned useless. That "I/O wait" state is why nothing stays at 100% for long.
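If you want to watch that rhythm yourself, here is a rough sketch - the file path is just an example, so point it anywhere you can write. The inner loop is time spent "Executing", the Print # line hands a (small) I/O request to Windows, and DoEvents is a voluntary yield that lets the Access window repaint:

```vba
Public Sub ExecuteThenWait()
    Dim pass As Long, j As Long, x As Double, f As Integer

    f = FreeFile
    Open "C:\Temp\cpu_demo.txt" For Output As #f    ' example path - change as needed

    For pass = 1 To 100
        For j = 1 To 1000000                        ' a burst of pure CPU work ("Executing")
            x = Sqr(j) * Log(j + 1)
        Next j
        Print #f, "pass"; pass; "result"; x         ' then an I/O request handed to the OS
        DoEvents                                    ' and a voluntary yield back to Windows
    Next pass

    Close #f
End Sub
```

Compare its CPU trace in Task Manager with the PureCpuBurn loop above. The difference will not be dramatic with toy numbers like these, but it is the same execute-then-wait rhythm that a real form or query request follows, just compressed.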