C & R

pdanes

Is there any way to detect when a database was compacted on its previous close?
 
The method used by C&R creates a new database and copies the elements from the old file into it, so the creation date on the newest version might be a hint. The modified date changes every time you open the file, but I believe the creation date is not affected by a simple open/close.
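If you want to experiment with that hint in code, here is a minimal VBA sketch. The path is a placeholder, and the premise (that C&R produces a file with a fresh creation date, while a plain open/close only bumps the modified date) is an assumption worth verifying on your own system:

```vba
' Hypothetical helper: read the file dates of a database.
' "C:\Data\MyApp.accdb" is a placeholder path - substitute your own.
Public Sub ShowFileDates(ByVal dbPath As String)
    Dim fso As Object
    Set fso = CreateObject("Scripting.FileSystemObject")
    With fso.GetFile(dbPath)
        Debug.Print "Created:  "; .DateCreated       ' should move after a C&R
        Debug.Print "Modified: "; .DateLastModified  ' moves on every open/close
    End With
End Sub
```

A large gap between the two dates would suggest the file has not been compacted recently.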

Please be aware that there are issues if that is an automated C&R. They have been known to fail and leave the DB in a really bad state.
 
Is there any way to detect when a database was compacted on its previous close?
No. What problem are you trying to solve?

If your db suffers from bloat, it is better to fix the problem. As Doc mentioned, compact on close isn't a good idea. I never do it for my FE or my BE. Once you distribute the FE to the users, you should not need compact on close, because unless you have created a bloating issue, the db won't grow. I also always replace the FE each time it is opened, so mine certainly don't bloat. We can help you fix a bloating issue if you have one.
 
No bloat, no automated C&R. I have one temp table in which I reset the autonumber field. I never ran into it during my testing, and the user never encountered it in the 16+ years he's been using the app, until yesterday. He was pounding furiously on one specific function that used that code, and suddenly got an error about too many fields defined. I did a little reading, discovered what it was and told him to do a C & R. That fixed it - turns out Access does something goofy internally like marking the old autonumber field as inactive and creating a new one. Naturally, after a sufficient number of cycles through this code, the app hits the limit, just like putting too many controls onto a form. C & R fixes it. (Exactly this kind of rubbish is what gives me a rash about Microsoft - instead of coding it properly to begin with, or fixing it when the problem is uncovered, they leave in such garbage and rely on a patchwork solution like C & R.)

I was thinking about putting a counter in this code and doing a C & R on close when the counter reaches a certain number, but I also wanted to be able to detect whether the user had done a C & R manually, so I could reset the counter then as well.

However, that's probably not a good approach overall. I think I will simply recode that module to do what I was doing with the autonumber field some other way.
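As a sketch of what "some other way" might look like: a plain Long column filled in code gives the same sequential numbers without ever touching the AutoNumber machinery. The table and field names here (tblTemp, SeqNo) are placeholders, not anything from the actual app:

```vba
' Hypothetical replacement for the AutoNumber sequence:
' hand out MAX + 1 on a plain Long column. Emptying the
' table restarts the sequence at 1, with no DDL involved.
Public Function NextSeqNo() As Long
    NextSeqNo = Nz(DMax("SeqNo", "tblTemp"), 0) + 1
End Function
```

Since clearing the temp table is the only "reset" needed, no hidden columns ever accumulate.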

(BTW, how do you "always replace the FE each time it is opened"? How would you do that, and why?)
 
Hm. The range of values for the Long Integer, which is the data type for AutoNumbers, is -2,147,483,648 to 2,147,483,647.

So even if your user created only positive values, that is 2 billion, 147 million, 483 thousand, 647 records possible. Are you sure the user was able to create that many records in one day? That would be an extremely heavy load indeed, it would seem.

Unless the user could show you a table with an AutoNumber that large, I'd look elsewhere for the actual problem. There is one, no doubt, but this may be the wrong path.
 
(BTW, how do you "always replace the FE each time it is opened"? How would you do that, and why?)

The overview is that you build a very short batch job to copy the "master" FE over the prior copy, then launch what you just copied. You do it as a way to automatically update the FE, so that if you have "fixed" something, everybody gets the fixed version on the next launch. As a side effect of this approach, even if you DIDN'T make an update, the users get a fresh copy of the FE that has not yet had a chance to get bloated by use.

Since the FE is where "the action" occurs, it is also where temporary lists and other transient data structures are built during queries, particularly if you have layered queries. Those temporary structures contribute to bloat.


This might help. The article is #7 of the indicated thread. It shows a simple four-line batch job and discusses the subject in greater depth.
 
Hm. The range of values for the Long Integer, which is the data type for AutoNumbers, is -2,147,483,648 to 2,147,483,647.

So even if your user created only positive values, that is 2 billion, 147 million, 483 thousand, 647 records possible. Are you sure the user was able to create that many records in one day? That would be an extremely heavy load indeed, it would seem.

Unless the user could show you a table with an AutoNumber that large, I'd look elsewhere for the actual problem. There is one, no doubt, but this may be the wrong path.
No, you misread my post. I did not overrun the maximum allowable number IN an autonumber field - I reset the autonumber field to start at 1 again.

The clods at MS somehow coded this action to not actually reset the numbering, but instead create a NEW autonumber field in the same table, while leaving the old one hidden somewhere in the guts of the table. Tables can only have 255 fields, apparently including such debris.

I work with this user pretty intensively, and as he is constantly coming up with new ideas, or changing his workflow and thereby his requirements, he gets a new version quite often, and of course, a new version is always freshly compacted by me before shipping, so this has never come up before. But in the last few days, he came up with a project that suddenly made massive use of the code that does this reset, and for the first time in the life of the project, managed to overrun the field count limit.

I had no idea Access worked this way - I assumed restart a numbering sequence meant exactly that and nothing else. (I know, assume...). I'd love to apply a steel-toed size 14 to the rear end of whoever dreamed up this bit of stupidity.
 
I'm not justifying the sloppy code but I'm sure that the developer convinced himself it would never be a problem because autonumbers have no meaning and so resetting them over and over again would never happen.
 
I'm not justifying the sloppy code but I'm sure that the developer convinced himself it would never be a problem because autonumbers have no meaning and so resetting them over and over again would never happen.
"No meaning" is only one way to use an autonumber field. I was using it as a sequence number - seemed a handy, no-code way to get sequential numbers. And it does do that, very nicely. There are no other actions that mess with this temp table, so I have no problems of missing numbers or any of the other issues that bedevil incorrect use of this feature. Trivial to code around, now that I know what the problem is, but if they allow the start number to be reset AT ALL, it seems mind-numbingly incompetent to do it in a way that crashes after a certain number of repeats.
 
The overview is that you build a very short batch job to copy the "master" FE over the prior copy, then launch what you just copied. You do it as a way to automatically update the FE, so that if you have "fixed" something, everybody gets the fixed version on the next launch. As a side effect of this approach, even if you DIDN'T make an update, the users get a fresh copy of the FE that has not yet had a chance to get bloated by use.

Ah, yes. Thank you. I have looked at that mechanism, and it doesn't quite suit my purposes.

I have two main 'types' of customers. One is a multi-user environment, talking to a SQL Server backend. There of course I have separate front ends, but I use a version number. When an FE opens, it compares its own version number to the version on the server. When there is a newer one on the server, the user is asked if he wants to download the new version (default Yes). That then starts a download / replace / restart process. But even the FE is around 100MB, so I don't want to irritate the users by unnecessarily copying it every time. To update to a newer version, I just put the new version with the new number on the server, and each user can get it next time he starts.
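The version check described above might look something like this in VBA. The constant, table, and field names are placeholders for illustration:

```vba
' Version compiled into this copy of the FE (placeholder value).
Public Const APP_VERSION As Long = 42

' Compare the FE's own version to the one published in a linked
' BE table (hypothetical tblVersion.VersionNo on the server).
Public Function UpdateAvailable() As Boolean
    UpdateAvailable = (Nz(DLookup("VersionNo", "tblVersion"), 0) > APP_VERSION)
End Function
```

On startup, a True result would trigger the download / replace / restart sequence.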

The other type is single-user, and there I deliver the app as a single-file setup. Splitting it would introduce pointless complexity, especially since the users often work on multiple computers, including taking a notebook (with database) into the field, to a vacation cottage, to a conference and so on. A single-file setup means they can simply copy the DB onto a flash drive and take it wherever they want - to work, to a home machine, to a portable notebook, even to other people's computers if they want to consult with someone. As long as the target computer has a compatible version of Access, they're fine.

When I deliver a new version to such a user, it is an empty shell, with the suffix "New" appended to the name. On start-up, the DB examines itself. When it sees all tables empty and its own name carrying the "New" suffix, it 'knows' that it is the new version, so it asks the user where the prior version (with all the current data) is. It then reads in the data, often doing some modifications or conversions in the process if the new version requires it, then spawns and executes a script that renames the old version with the suffix "_Archive" and a date-time stamp, renames the "New" version to the old version's name, and restarts the new (now full) version under the stock name. The process takes less than a minute. If it crashes (happens occasionally), the user simply lets me know what happened and continues on with the old version, as if nothing had happened. When the process completes successfully, he again continues, with the new version named exactly as the old one was.
 
The clods at MS somehow coded this action to not actually reset the numbering, but instead create a NEW autonumber field in the same table, while leaving the old one hidden somewhere in the guts of the table.
Why should compacting the backend create a new data field?
I cannot reproduce the problem, neither with DBEngine.CompactDatabase nor with ALTER TABLE ... ALTER COLUMN ... COUNTER(1,1).
Does this only occur during automatic compact when closing?
 
Why should compacting the backend create a new data field?
I cannot reproduce the problem, neither with DBEngine.CompactDatabase nor with ALTER TABLE ... ALTER COLUMN ... COUNTER(1,1).
Does this only occur during automatic compact when closing?
No - resetting the starting number of the autonumber field is what (apparently) creates a new column, while leaving behind some hidden dregs of the former one. C & R clears out such debris. Reset the start enough times (~255) without a C & R, and you get the "too many fields" error. Try putting your ALTER ... COUNTER code in a loop and see how many repetitions you get before it crashes.
 
Replacing the FE each time you open it has nothing to do with version checking. Version checking ensures that the user is running the application correctly and that the FE and BE are in sync.

Updating a single-user monolithic app is certainly possible but I would still split the app and have the user copy two files instead of one. Automatically connecting to the BE is easy enough if they are in the same folder. You could give him a copy script that zips the two files into a single file. In this case, in addition to the version match, I might use a table in each database with a counter that updates each time the app is opened to ensure that they are copied in sync. Then when the counter is zero in the FE, the BE is resync'd with the new FE.
 
Try putting your ALTER ... COUNTER code in a loop, and see how many repetitions you get before it crashes.
Code:
Const ResetCounterDDl As String = "ALTER TABLE table1 ALTER COLUMN id COUNTER(1,1)"

Dim i As Long
For i = 1 To 10000
    'CurrentProject.Connection.Execute ResetCounterDDl
    CurrentDb.Execute ResetCounterDDl, dbFailOnError
Next
=> no crash.

Note: Compact also resets the counter to the smallest possible value.
 
Replacing the FE each time you open it has nothing to do with version checking. Version checking ensures that the user is running the application correctly and that the FE and BE are in sync.

Updating a single-user monolithic app is certainly possible but I would still split the app and have the user copy two files instead of one. Automatically connecting to the BE is easy enough if they are in the same folder. You could give him a copy script that zips the two files into a single file. In this case, in addition to the version match, I might use a table in each database with a counter that updates each time the app is opened to ensure that they are copied in sync. Then when the counter is zero in the FE, the BE is resync'd with the new FE.
Yes, when you replace it always, you do not need to check the version. I do NOT replace it always, hence the version check. I only replace when needed, in the multi-user systems.

For the single-file configuration, I know there are other ways to do this, and I understand that a split system offers some advantages to even a single user. It also has disadvantages, the primary one of which is additional complexity. I know, I -could- provide a script to do the copying, I -could- tell the user to copy a folder instead of a file, I -could- put version numbers into both FE and BE to ensure that the user didn't copy only one of the files instead of both. I -could- do any number of things to compensate for the disadvantages of a split system, but the point is, I don't need to, and I don't particularly want to. This works, the users are used to it, and everyone is perfectly satisfied. The app makes regular backup copies of itself into a subfolder, in case something gets corrupted. The user deletes those a few times per year, when they start getting too numerous.

Splitting a database sometimes makes sense, and other times not. It is not universally a good move - a single-file system is often a better configuration, primarily for the simplicity.
 
Code:
Const ResetCounterDDl As String = "ALTER TABLE table1 ALTER COLUMN id COUNTER(1,1)"

Dim i As Long
For i = 1 To 10000
    'CurrentProject.Connection.Execute ResetCounterDDl
    CurrentDb.Execute ResetCounterDDl, dbFailOnError
Next
=> no crash.

Note: Compact also resets the counter to the smallest possible value.
Well, maybe a reset doesn't create a new field if the autonumber has never been incremented? I don't know - I just heard about the problem yesterday morning from the user, and haven't investigated the details myself yet, but that is the first thing that occurs to me - no reset, if the counter has not yet actually moved off the starting point. Maybe, in your loop, try adding a record to the table to trigger the autonumber incrementer and then reset?
 
Looking into this some more, it seems that it's not JUST the autonumber. Quoting directly from The Company:

learn.microsoft.com/en-us/office/troubleshoot/access/too-many-fields-defined-error

"Access also creates a new field (increasing the internal column count by 1) for every field whose properties you modify."

Words fail me...
 
Code:
' Const from the earlier post, repeated so the snippet is self-contained:
Const ResetCounterDDl As String = "ALTER TABLE table1 ALTER COLUMN id COUNTER(1,1)"

Dim i As Long
With CurrentDb
    For i = 1 To 10000
        .Execute "insert into table1 (T) select top 100 T from table2", dbFailOnError
        .Execute "delete from table1", dbFailOnError
        .Execute ResetCounterDDl, dbFailOnError
    Next
End With
=>no crash.
Tested with Access 365 version 2405
Can you reproduce the problem?

learn.microsoft.com/en-us/office/troubleshoot/access/too-many-fields-defined-error
... TableDef is used and not DDL.
 
@pdanes - My first question to you would be to check the table when this is done to see if it renamed the autonumber field. Normally, you don't care about the name of the field THAT much because, as noted, autonumbers are in theory not supposed to have any meaning other than "unique record number." If there is no table that depends on the table in question (i.e. no child table exists), then there are also no links involving this field and you would have no collateral issue in a "hidden" rename.
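A quick way to act on that first question is to dump the table's visible column names as a sanity check; "tblTemp" stands in for whatever the real temp table is called:

```vba
' List the fields DAO can see on the table, to check whether
' the AutoNumber column was silently renamed after a reset.
Public Sub DumpFieldNames(ByVal tableName As String)
    Dim fld As DAO.Field
    For Each fld In CurrentDb.TableDefs(tableName).Fields
        Debug.Print fld.Name, fld.Type
    Next fld
End Sub
```

Note that DAO reports only the visible columns, so any retired internal columns would not show up here.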

Your question about resetting the seed number on an empty table vs. a non-empty table is a valid question. @Josef P. - if you still have the code where you can try it, insert a record before you reset the sequence number to 1. See if THAT exhibits the odd behavior.

@pdanes - my second question for you is this: When this reset operation occurs, are error messages and warnings disabled? Because I can easily believe that Access takes some kind of evasive action, but I can't believe it does so silently. If Access has ANY failings in that regard, it is frequency of error reporting and/or warnings (normally, too often).

EDITED by The_Doc_Man: @Josef P. - you posted the 2nd experiment report just before I closed my comments. So thanks for doing the test!
 
Yes, when you replace it always, you do not need to check the version.
Yes, you do. If the user makes a mistake and opens Access and clicks on the last used database, the new FE will not be downloaded and the one in the local directory will be used. This wouldn't be terrible unless this was the day that the versions changed. Better safe than sorry.

The only problem I see with your monolithic solution is that if the user downloads a new version and copies it to work offline without opening it first so it can run the copy code, he ends up off site with no data. Otherwise, as long as the data transfer is automated, it should be fine.
 
