How to make a large project?

....giving you another 32 files and another 2 GB of task virtual space, but there is this little "gotcha" called "process independence" that would prevent the two tasks from easily communicating with each other...

The two (or three or more) tasks do not have to communicate with each other; they communicate with the db server.
Is there a maximum limit of 2 GB of usable memory also for the 64-bit versions of Access?
Do you know where I can find documentation that describes these things?
 
Is there a maximum limit of 2 GB of usable memory also for the 64-bit versions of Access?
The 2 GB refers to file size, not memory usage. Since your data is going to be in SQL Server, that parameter is irrelevant. What will matter is how much data you 'bring across' from SQL Server to sit in memory.
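As a rough illustration of 'bringing across' only what you need, here is a minimal sketch; the form name, table and date filter are hypothetical, and for simple criteria like this Access can pass the restriction to the server so only matching rows are fetched:

```vba
' Sketch: pull only the rows you need from the SQL Server back end rather than
' binding a form to a whole linked table. "frmOrders" and the OrderDate filter
' are placeholder names for illustration.
Private Sub cmdShowRecentOrders_Click()
    ' The WhereCondition limits what the form asks for.
    DoCmd.OpenForm "frmOrders", acNormal, , "OrderDate >= #2024-01-01#"
End Sub
```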

A relatively modern low-end computer typically has 8 GB of memory, and a high-end one 32 GB or more. Of this, Windows and associated apps take around 4 GB, leaving 4 GB (on the low end) for other apps such as Outlook, Excel, Access, etc. If you (or your users) are using a virtual drive with a Windows OS, then each of these will take around 4 GB, so a low-end computer will struggle for memory with a virtual drive. Note that the more memory you have, the more Windows may take to optimise its own performance; with 32 GB of memory, Windows may take 10-12 GB.

Easy enough for you to monitor - see the performance tab in the task manager. Open the apps you would normally have open (or you expect your users to) to get a baseline - then open an access app and see how the usage changes. To see how memory is affected, do things in access such as connect to sql server, open a form to display a few records, another with lots of records, whatever floats your boat and meets the potential needs of your project.
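If you would rather log that figure from inside Access instead of watching Task Manager, here is a minimal sketch, assuming 64-bit Office (VBA7), where Declare PtrSafe and LongLong are available:

```vba
' Sketch: print the overall memory load to the Immediate window.
' Assumes 64-bit Office (VBA7). Paste into a standard module and run ShowMemoryLoad.
Private Type MEMORYSTATUSEX
    dwLength As Long
    dwMemoryLoad As Long              ' percent of physical memory in use
    ullTotalPhys As LongLong
    ullAvailPhys As LongLong
    ullTotalPageFile As LongLong
    ullAvailPageFile As LongLong
    ullTotalVirtual As LongLong
    ullAvailVirtual As LongLong
    ullAvailExtendedVirtual As LongLong
End Type

Private Declare PtrSafe Function GlobalMemoryStatusEx Lib "kernel32" _
    (ByRef lpBuffer As MEMORYSTATUSEX) As Long

Public Sub ShowMemoryLoad()
    Dim ms As MEMORYSTATUSEX
    ms.dwLength = LenB(ms)            ' the API requires the structure size up front
    If GlobalMemoryStatusEx(ms) <> 0 Then
        Debug.Print "Memory load: " & ms.dwMemoryLoad & "%, " & _
                    Format(ms.ullAvailPhys / 1024 ^ 3, "0.0") & " GB free of " & _
                    Format(ms.ullTotalPhys / 1024 ^ 3, "0.0") & " GB"
    End If
End Sub
```

Run it before and after opening your forms to see roughly how much the figure moves.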
 
I have 64 GB of memory in my laptop.
As I write this with an RDP session open, 10+ tabs in the DuckDuckGo browser, a PDF open in Chrome, Excel, Outlook, WhatsApp, SSMS, the Sonos desktop app, Teams, Notepad++, Toggle (a time-recording app) and a fairly chunky Access application open on a complicated form with lots of subforms, my memory usage seems firmly pegged at 20.5 GB, or about 33%.

Windows definitely does grab a percentage of what's available. SQL Server, if you decide to run it on a local machine, is a real resource hog.
 
I understand what you mean, and I sincerely appreciate the time spent on the explanation above.
However, it does not affect the problem posed in the slightest.

So, let's say I want to start a project (I'm talking about forms/reports/code; the data is on a classic db server) that allows the technical limits of each single Access file to be overcome.
So if the maximum number of forms that can be contained in a single Access file is 1000, then my project will have to be able to include at least 1001 forms, and the same goes for reports and for code.

I ask this because once the optimal structure of the project has been found (and I mean the way to update the procedures, the way to reconnect the data tables, the way to control user access to the various objects, etc.) so that it allows the object limits of a single Access file to be overcome, then the thing is done.
If this has never been done, and is therefore a novelty, that seems to me an excellent reason to find a solution, not a reason to desist.
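For what it's worth, the "way to reconnect the data tables" mentioned above usually amounts to a relink loop run at startup. A minimal sketch, assuming DSN-less ODBC links to SQL Server; the driver, server and database names are placeholders:

```vba
' Sketch: repoint every linked table at the SQL Server back end.
' The connection string below is a placeholder - adjust driver/server/database.
Public Sub RelinkSqlTables()
    Const CONNECT As String = "ODBC;Driver={ODBC Driver 17 for SQL Server};" & _
                              "Server=MYSERVER;Database=MyDb;Trusted_Connection=Yes;"
    Dim db As DAO.Database
    Dim td As DAO.TableDef
    Set db = CurrentDb
    For Each td In db.TableDefs
        If Len(td.Connect) > 0 Then   ' only linked tables have a Connect string
            td.Connect = CONNECT
            td.RefreshLink
        End If
    Next td
End Sub
```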
Seriously, I don't see how you can need over 1000 forms.
 
Is there a maximum limit of 2 GB of usable memory also for the 64-bit versions of Access?

Unfortunately, yes.

I'll try to explain. Windows was developed at a time when people talked about "memory models" - having to do with how the hardware memory management system worked. Different versions of memory management hardware provided different amounts of address space. These models started with the "16 bits is all you've got" model based on 16-bit addressing of the Intel 8080 chip. Those machines could NEVER exceed 64K of RAM. But over time, models progressed.

Office 32-bit versions grew up with the "large memory model" that says that you have two major divisions of any Office task memory. Using the then-current 32-bit model, they took the 32-bit address pointers and sub-divided them. The two divisions were then two semi-independent "address spaces" in which Access, all of its libraries, and anything else you had, all would get placed into one or the other of the two spaces.

In this model, "low" memory (0h00000000 - 0h7FFFFFFF) was for MSACCESS.EXE itself plus any library references, the program stack & program heap (two large working areas) and a few other elements. Library references could go anywhere within the low space because they are called "dynamic link libraries" in which all addressing is relative to where the library is located. That is, if you know about the hardware registers of your machine, the Program Counter (which is a register that keeps track of the next instruction to be executed) can be used as an offset for addresses so that you could drop it anywhere in memory and just have a pointer to that location to find the entry points in that library. The Stack Pointer register was also used for calls that established a stack "frame" - basically, the data structure that temporarily holds all of the local variables of any sub/function.

The low memory would be where you had all of your library routines that arrived through something OTHER than your VBA code modules. One of the overhead items in that structure is in the bottom of low memory because in that model, the memory management data (identifying the various distinct memory segments in the various distinct program elements) occupied the low 1 MB. Address 0 was NEVER memory mapped (which is how Windows tasks catch undefined addresses - with a memory management exception). Since it is ALL virtually addressed, there can be gaps between modules but the gaps don't waste physical space.

The "high" memory is actually easier. Whatever is in the .ACCDB (or, originally, .MDB) files goes to high memory. Got multiple files with multiple tables? Doesn't matter. The memory management can be dynamically "diddled" so that if you have to address something in file A, the correct pointers can be loaded for it even if the internal addresses would conflict with file B. However, because of these individual files all being developed originally as single files at the time of their development, they are ALL based on the starting address of 0h80000000. So to address the contents of one App file or one BE file, you must load the appropriate memory management pointers when you load the actual address. Now, if you think about it, this is why you cannot have a relationship between tables in different files - only one of the tables can be mapped at a time and for a relationship you need to map TWO tables.

Anyway, because these were 32-bit addresses, their total address space was 2^32 bytes, or 4 GB. The subdivision into high and low memory meant that the core code, libraries and memory mapping overhead plus the stack and heap had no more than 2 GB, and whatever you had from DB files had the other 2 GB.

Now come forward in time to the introduction of 64-bit Office. For reasons known only to the MS board of directors, the internal structure of Access was NOT fully upgraded in the same way that Word and Excel were, because Access still is limited to that "large memory model" architecture. It did not get expanded to the "very large" model. It still contains the high/low division to 2 GB each. Even though there have been a FEW improvements including the ability to call routines that use 64-bit addressing (see also PtrSafe discussions), the core of an Access DB file is still firmly rooted in 32-bit memory models and thus is still constrained regarding total size.
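To make the PtrSafe point concrete, here is a small illustration (a common user32 call chosen only as an example): under VBA7 the declaration carries PtrSafe and uses LongPtr, which is 8 bytes in 64-bit Office and 4 bytes in 32-bit Office.

```vba
' Illustration of a 64-bit-safe API declaration in VBA.
#If VBA7 Then
    ' LongPtr resizes with the host: 8 bytes on 64-bit Office, 4 bytes on 32-bit.
    Private Declare PtrSafe Function FindWindowA Lib "user32" _
        (ByVal lpClassName As String, ByVal lpWindowName As String) As LongPtr
#Else
    ' Pre-VBA7 (32-bit only) form.
    Private Declare Function FindWindowA Lib "user32" _
        (ByVal lpClassName As String, ByVal lpWindowName As String) As Long
#End If
```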

Since that "high" segment also encompasses ALL of the .MDB / .ACCDB files and does not overlap into "low" memory, that means you still have no more than 31 bits (2 GB) of address space for a single DB file... but you ALSO have only 2 GB worth of address space for the running application no matter which files it happens to be mapping at the moment. Yes, it CAN address a lot more than 2 GB - but it only can do 2 GB at a time, and no table can cross one of those implied segment boundaries.

@amorosik - I was not attempting to demean you with the joke about the horse, I was attempting to inject levity into what had become a highly contentious thread. But if you note the responses you were starting to get, you will realize that our frustration begins to emerge as we continue to offer our best answers and you continue to reject them.

If your question continues to evoke this kind of response, then from what I am seeing, our responding members can only see two possibilities. First is that you have some agenda to intentionally provoke us. Second is that you have not been able to correctly frame your real question and thus have left us frustrated in trying to understand what you are really asking. That is, we have a "logical communications disconnect" somewhere.
 
It seems to me an excellent reason to find a solution, not a reason to to desist
Imagine a company that wants a single application to do everything - stock management, production control, invoicing, sales, purchase and nominal ledgers, HR, payroll, financial management, forecasting, legal, sales and marketing, maintenance, corporate structure, R&D etc., and let's add in that they also want to manage their web site and email as well. One system. Is that what you are imagining?

If so, aside from the fact there are other apps out there that can do certain aspects better than Access, do you really want a maintenance person to view HR stuff (they may need to know some information about their subordinates or others within the organisation, such as name and position) or sales invoices?

So you build separate apps with designed and limited links to relevant data in other databases as required (SQL Server should be set up with multiple databases based on function). If you really want the maintenance person to also manage HR, you would either provide a button to open the HR app, or add the HR app as a library.

All this can be controlled from one 'master' app which basically only contains a menu for users to select the functionality they require, with the menu options a user sees controlled by their login profile.
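A minimal sketch of what one of those menu buttons might do; the path, file name and the UserHasAccessTo profile check are all hypothetical:

```vba
' Sketch: a menu button in the 'master' app that launches a separate front end
' in its own Access instance. Path, file name and the profile check are placeholders.
Private Sub cmdOpenHR_Click()
    Const HR_FE As String = "\\fileserver\apps\HR_FE.accdb"     ' hypothetical path
    If UserHasAccessTo("HR") Then                               ' your own login-profile check
        Shell SysCmd(acSysCmdAccessDir) & "msaccess.exe """ & HR_FE & """", vbNormalFocus
    Else
        MsgBox "Your profile does not include the HR module.", vbInformation
    End If
End Sub
```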

In any event I don't see a problem that requires a solution.

Suggest you describe the scenario you envisage. What will the envisaged app actually do?
 
Maybe I'm asking for a technique/system that is normally never used with Access
I came to Access after 40 years developing mainframe applications for large companies all over the country and even in the Middle East. No project is ever indefinite. They have fixed objectives at the beginning. Sometimes during design or development we discover a flaw or omission in the initial plan. Then we rework what we have completed and start on the new stuff. Occasionally, projects get cancelled because they were so poorly conceived that they failed to be implemented in a timely fashion.

I have developed a number of mission critical systems using Access. My clients use Access to run their entire business with the exception of their accounting and payroll which are usually purchased in the case of a general ledger or outsourced in the case of payroll. But, my apps do what the company needs in order to make/sell their product.

I would never create a database that includes all the functionality for every department in a single FE. The database may be shared, but the FE's are different for each application. The users don't find this odd or inconvenient. They would have the same situation if they were working with mainframe applications.

You're going to have to tell us what this mystical ever growing application is because it simply isn't something any client would pay me to develop because it has no objective and no end point for my employment.
 
I'm trying to figure out how to evolve projects that are approaching the technical limits of a single Access file
You needed to say that at the beginning. That is a different situation than your initial question.

Usually projects like this were poorly designed to begin with, and what you need to do is get a tool like Total Access Analyzer from FMSInc.com, or go to @isladogs' site and look at his tool, which is similar but won't have the same exact features. You might even want to purchase both. Crystal has a similar tool which I think is free.

Then you need to look for objects that duplicate other objects. You will probably find clusters of queries that have hard-coded criteria which could be reduced to a single query with arguments. You will find multiple forms that update the same data. You will find reports that are identical except for their sort sequence which could be supplied as an argument.
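As a hedged example of that first point (all object and field names here are hypothetical), a single saved query with a parameter can replace a whole cluster of near-identical queries that differ only in a hard-coded value:

```vba
' Sketch: one parameterized query instead of qryOrders2023 / qryOrders2024 / ...
' The saved query "qryOrdersByYear" (hypothetical) would be defined as:
'   PARAMETERS prmYear Long;
'   SELECT * FROM tblOrders WHERE Year(OrderDate) = [prmYear];
Public Function OrdersForYear(ByVal TheYear As Long) As DAO.Recordset
    Dim qd As DAO.QueryDef
    Set qd = CurrentDb.QueryDefs("qryOrdersByYear")
    qd.Parameters("prmYear") = TheYear          ' supply the criterion at run time
    Set OrdersForYear = qd.OpenRecordset(dbOpenSnapshot)
End Function
```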

You may find that the database is used by several different departments to do different things. You can leave the BE as a single database if some of the tables are shared but you can split off multiple FE's that are single purpose.
 
Unfortunately, yes.

I'll try to explain. Windows was developed at a time when people talked about "memory models"...

Thanks for sharing your valuable experience and knowledge.
Looking for information on the topic of memory management, I see that several experts (Karl, Mike, Daniel, ...) seem to assert that 64-bit Access is not affected by the 2-4 GB memory limit for containing applications and data.

Is there official Microsoft documentation that deals with these issues?
 
Imagine a company that wants a single application to do everything... One system. Is that what you are imagining?
Knowing which features the project will have is of no importance in finding a technical solution that lets the user work with a number of objects greater than what the development environment allows.


So you build separate apps with designed and limited links to relevant data in other databases as required...
But what does the maintenance person who touches the accounting have to do with it?


All this can be controlled from one 'master' app which basically only contains a menu for users to select the functionality they require...
Yes, a master app that starts other subordinate apps could be a solution to the problem.
It remains to be seen whether the memory usage problem described by @The_Doc_Man could be a real obstacle to this type of operation.
 
I came to Access after 40 years developing mainframe applications for large companies all over the country and even in the Middle East. No project is ever indefinite...

Remembering all the work you've done isn't much help in solving this specific problem.
And needing to understand what a software procedure actually does is, from the developer's point of view, completely useless.
Or, if you like to think of a specific system, then think of a system that can be used to create mailing campaigns, that allows you to create estimates, manage orders to suppliers, sell with documents of different types, that can be connected to POS terminals and electronic cash registers, that integrates an accounting system, that allows you to power an e-commerce site, ...
But, as I wrote above, dwelling on the details of the system does not provide any help with regard to a possible solution to the problem posed.
 
Remembering all the work you've done isn't much help in solving this specific problem
It is because you are saying you have to refactor existing applications. I've done that. I offered you concrete advice. You are correct that what the app does is irrelevant, but to fix it you need to understand it. Otherwise, you can never make order out of chaos.

You are simply refusing to listen. You have to understand what you have so you start by documenting it. Then you get rid of duplicate stuff and in the end, you may end up creating multiple FE's to isolate unrelated functionality.
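The documenting step can at least be started from inside Access itself. A minimal sketch (the dedicated analyzer tools mentioned earlier go much further than this):

```vba
' Sketch: dump object counts and form names to the Immediate window as a
' starting point for spotting duplicated or near-duplicated objects.
Public Sub InventoryObjects()
    Dim obj As AccessObject
    Debug.Print "Forms:   " & CurrentProject.AllForms.Count
    Debug.Print "Reports: " & CurrentProject.AllReports.Count
    Debug.Print "Modules: " & CurrentProject.AllModules.Count
    Debug.Print "Queries: " & CurrentDb.QueryDefs.Count   ' includes hidden system queries
    For Each obj In CurrentProject.AllForms
        Debug.Print "  Form: " & obj.Name
    Next obj
End Sub
```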
 
You needed to say that at the beginning. That is a different situation than your initial question...

This is the initial post

"...The question is: working on a project that is expected to grow continuously at the rate of a few dozen objects/month, how do you set things up from the beginning to allow this growth without preventing the correct overall functioning?..."

Nothing has changed at all.
If you associate the word 'project' with 'a single file', then you misunderstood.
 
You provided links for articles discussing "Large Address Aware" - the option to allow programs to re-allocate their memory maps. I followed all three links. The solutions shown in those articles move the barrier from 2 GB to either 3 GB or 4 GB depending on the specific structure of the app. They AT MOST double the size of that address space, so might provide for a little more room. But that is a one-time expansion. Once you implement the LAA option, there is nowhere else to go. The internal addresses for data tables apparently did not change when the LAA options were implemented.

You don't seem to understand that your question asks us how to push a finite memory to essentially infinite limits. Don't you see the inherent contradictions in that question? What do you do when you reach the edge of a cliff and take one more step? You FALL DOWN.

As long as you continue to ask this question in the abstract, you are going to be told in the abstract, "You can only go so far and no farther." If you had some specifics - which in your case it seems you don't - then we could work on a framework of those specifics to help you optimize the layout and maximize your storage efficiency. Dealing in abstraction - as your question appears to be doing - is ALWAYS going to lead to generalized (and probably uniformly negative) answers.

seem to assert that Access 64bit is not affected by the problem of 2-4 Gbyte of memory to contain applications and data

No, when I read the articles, I did not see anything like that. If you think they said that, then I respectfully suggest you saw what you WANTED to see rather than what was really there. The barriers I discussed were moved down the road a little - but they are still there.

Is there official Microsoft documentation that deals with these issues?

Yes - it is the Microsoft Access Specifications document. It tells you how far you can go with something Access-related. I've been referencing it for quite a while. Did you think I was pulling those limits out of thin air?
 
...The question is: working on a project that is expected to grow continuously at the rate of a few dozen objects/month, how do you set things up from the beginning to allow this growth without preventing the correct overall functioning?...

You don't. The "without preventing correct functioning" part hides a tricky "gotcha" called "change orders." What you are asking for is, in essence, a two-pass (or higher) algorithm. To prevent roadblocks you must be able to see the future. You are asking me to violate time symmetry. So... no, you can't do this. Reality ALWAYS intrudes on projects. (See also the second law of thermodynamics, having to do with open vs. closed systems.)
 
"...The question is: working on a project that is expected to grow continuously at the rate of a few dozen objects/month, how do you set things up from the beginning to allow this growth without preventing the correct overall functioning?..."

And the real question is... why would you have a project that is never finished? Why would it need to grow continuously? Why can't you design a form or a query that works correctly in 2024 and also runs in 2025, without needing to add a new form for 2025? I assume that's the sort of problem you are dealing with.

You can; it's just a matter of understanding how to build the database with that functionality. Maybe you need a table with 12 monthly slots for the last 12 months, fill those slots with the data you need, and then you have a mechanism to generate a 12-month report without adding new code each month, and without having to spend time every month redesigning the database.
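A minimal sketch of that idea (report, table and field names are all hypothetical): the date window is computed at run time, so the same report and query serve every month with no new objects added per period.

```vba
' Sketch: open one report filtered to the last 12 whole months.
' "rptMonthlySales" and the SaleDate/Amount fields are placeholder names.
' The underlying query might group as:
'   SELECT Format(SaleDate, "yyyy-mm") AS SaleMonth, Sum(Amount) AS Total
'   FROM tblSales GROUP BY Format(SaleDate, "yyyy-mm");
Public Sub OpenRolling12MonthReport()
    Dim fromDate As Date
    fromDate = DateSerial(Year(Date), Month(Date) - 11, 1)   ' first day, 11 months back
    DoCmd.OpenReport "rptMonthlySales", acViewPreview, , _
                     "SaleDate >= #" & Format(fromDate, "yyyy\-mm\-dd") & "#"
End Sub
```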
 
You don't. The "without preventing correct functioning" part hides a tricky "gotcha" called "change orders."...
And the real question is... why would you have a project that is never finished?...

Yes, if we were in a group that discusses philosophy, then we could write about the why.
But we are in a technical group, and so I think it is enough to explain the how/what.
 
You provided links for articles discussing "Large Address Aware"... The barriers I discussed were moved down the road a little - but they are still there...
Yes - it is the Microsoft Access Specifications document...

I have provided those links for the sole purpose of making it clear that other experts on these topics expressly indicate that the 64-bit version of Access/Office does not suffer from the memory shortage problem, but only the 32-bit version does.
And this contrasts with the answer you gave me in post #66.
 
