Keeping file size to a minimum

rede96

Registered User.
Local time
Today, 20:07
Joined
Apr 2, 2004
Messages
134
Hi All,

I have a backend database that in itself doesn't grow in file size very much with normal use. However within the database is a table that updates overnight with a snapshot of various production information taken from an external source.

The update runs from a separate database and basically deletes all the current data in the backend table and then populates it with the current day's snapshot data.

The problem is that doing things this way means the backend file size grows quite quickly. Currently I have to compact the backend each night; if I don't, then within a few days its file size has grown to over 2GB and it stops working. Once compacted, the file size is always roughly the same.

Is there another way to delete data from a table and then re-populate without the file size growing each time?

Probably not, but thought I'd ask anyway :)

Thanks
 

isladogs

MVP / VIP
Local time
Today, 20:07
Joined
Jan 14, 2017
Messages
18,209
By definition, adding data has to increase file size.
The issue is whether the increase can be minimised.

You can use a combination of appending new records and updating existing records but
a) it's slower
b) it causes a similar amount of bloat (despite many articles which indicate otherwise)

However you can do the processing using a temp table in a separate 'side end' which can then be compacted using code ... or use a 'side end' that is freshly created and then destroyed each night
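
For anyone wanting to try the second option, here's a rough sketch in DAO of a side end that is created fresh and destroyed each night. The path, the table name tblSnapshot, the field list and the source database are all placeholders, so adjust to suit:

Code:
Public Sub RebuildSideEnd()
    ' A rough sketch of the "freshly created and then destroyed each night" idea.
    ' The path, table name and field list are placeholders.
    Const SIDE_END As String = "C:\Data\SideEnd.accdb"
    Dim db As DAO.Database

    ' Throw away last night's side end and create a new, empty one
    If Dir(SIDE_END) <> "" Then Kill SIDE_END
    Set db = DBEngine.CreateDatabase(SIDE_END, dbLangGeneral)

    ' Build the working table in the side end and fill it with today's snapshot
    db.Execute "CREATE TABLE tblSnapshot (ProdID LONG, ProdDate DATETIME, Qty LONG)", dbFailOnError
    db.Execute "INSERT INTO tblSnapshot SELECT ProdID, ProdDate, Qty FROM tblSourceData IN 'C:\Data\UpdateDB.accdb'", dbFailOnError
    db.Close

    ' All the delete-and-repopulate bloat now lives in a file that is recreated nightly,
    ' so the main back end never grows.
End Sub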
 

rede96

Registered User.
Local time
Today, 20:07
Joined
Apr 2, 2004
Messages
134
By definition, adding data has to increase file size.
The issue is whether the increase can be minimised.

You can use a combination of appending new records and updating existing records but
a) it's slower
b) it causes a similar amount of bloat (despite many articles which indicate otherwise)

However you can do the processing using a temp table in a separate 'side end' which can then be compacted using code ... or use a 'side end' that is freshly created and then destroyed each night

Thanks for the reply. A side end sounds like a good solution. There are a lot of other temp tables I could put in there, which would help maintain the file size of the update database. Ultimately I’m looking for a solution I can just run without any manual intervention to compact & reset the file size. It’s difficult to do this with the main back end as we operate 24/7, so someone is usually locking the back end. A side end gets around this as the tables I need to update aren’t locked by open forms.

I suppose ideally all forms should be unbound and only updated after changes. Something for the next version I think :)

Anyway, thanks again.
 

The_Doc_Man

Immoderate Moderator
Staff member
Local time
Today, 14:07
Joined
Feb 28, 2001
Messages
27,146
Just a minor clarification:

Even updates of existing records cause databases to grow in size because what happens is that the record being updated is written as a new record which is then threaded into the table at the same time that the prior version of that same record is removed. A lot of people probably think of tables as being neat, compact little arrays of fields such as are shown in a datasheet view.

But... tain't necessarily so. The records in a table are written in the chronological order of creation, even if that is not the order of their prime keys (assuming they even have one). Access uses "sparse" programming techniques to assure that they only use as much space as the actual record needs. In this context, "sparse" means you don't store what isn't there. So if you have a short text field that allows 255 characters but the content is only 55 characters, you don't write a record with 55 characters and 200 ASCII NUL characters.

Therefore, if you update a record in a way that causes it to lengthen one of its short-text strings, you cannot re-use the same space it occupied before the change. It is no longer big enough. This leaves a memory gap of exactly the same size as the old version. These little gaps are probably not large enough for anything to use. And in fact, they cannot be reused since deleting that record doesn't guarantee that it wasn't still in use. (Have you ever run code and then suddenly noted that a previously-opened datasheet or form now shows a bunch of #Deleted records?)

On the other hand, Colin's suggestion of having a small, dedicated database file to hold only that table? Makes perfect sense. Further, it shouldn't be that hard to simply erase that table, compact the dedicated file (because it will be SMALL at that point, being nearly empty except for the hidden system tables), and then do your import.
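
To illustrate, here's a rough sketch of that erase / compact / import cycle in DAO. The file paths, tblSnapshot and tblSourceData are placeholder names, and the compact step needs exclusive access to the dedicated file, so nothing else should have it open at that point:

Code:
Public Sub RefreshSnapshot()
    ' A rough sketch of "empty the table, compact the small dedicated file, then import".
    ' SIDE_END, tblSnapshot and tblSourceData are placeholder names.
    Const SIDE_END As String = "C:\Data\SnapshotBE.accdb"
    Const TEMP_END As String = "C:\Data\SnapshotBE_tmp.accdb"

    ' 1. Empty the snapshot table held in the dedicated file
    CurrentDb.Execute "DELETE * FROM tblSnapshot IN '" & SIDE_END & "'", dbFailOnError

    ' 2. Compact the (now nearly empty) file. CompactDatabase writes to a new file,
    '    so compact to a temporary name and swap it back.
    DBEngine.CompactDatabase SIDE_END, TEMP_END
    Kill SIDE_END
    Name TEMP_END As SIDE_END

    ' 3. Re-import today's snapshot (tblSourceData is a local or linked table in the
    '    database running this code)
    CurrentDb.Execute "INSERT INTO tblSnapshot IN '" & SIDE_END & "' SELECT * FROM tblSourceData", dbFailOnError
End Sub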
 

Mark_

Longboard on the internet
Local time
Today, 12:07
Joined
Sep 12, 2017
Messages
2,111
Thanks for the reply. A side end sounds like a good solution. There are a lot of other temp tables I could put in there, which would help maintain the file size of the update database. Ultimately I’m looking for a solution I can just run without any manual intervention to compact & reset the file size. It’s difficult to do this with the main back end as we operate 24/7, so someone is usually locking the back end. A side end gets around this as the tables I need to update aren’t locked by open forms.

I suppose ideally all forms should be unbound and only updated after changes. Something for the next version I think :)

Anyway, thanks again.

Two items...
First, if your users are using a local copy of the FE for all activities and you copy a new one EVERY TIME they open the "DB", you could have all of your temp tables in the front end. When the update is run, the front end gets the bloat. Next time the user logs in, POOF, new front end without any of the bloat.

Second, why would you want to move to unbound forms? This removes most of the advantages of using ACCESS in the first place.
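
For what it's worth, here's a rough sketch of the first point: a small launcher that copies the master front end to the user's local profile and then opens it. The share path, folder and file names are all placeholders:

Code:
Public Sub LaunchLocalFrontEnd()
    ' A rough sketch of the "fresh front end on every launch" idea, e.g. placed in a
    ' tiny launcher database. The share path, folder and file names are placeholders.
    Const MASTER_FE As String = "\\Server\Share\MyApp_FE.accdb"
    Dim localFolder As String, localFE As String

    localFolder = Environ("LOCALAPPDATA") & "\MyApp"
    localFE = localFolder & "\MyApp_FE.accdb"

    ' Overwrite the user's local copy with the master, so any temp-table bloat
    ' accumulated since the last launch simply disappears
    On Error Resume Next
    MkDir localFolder
    On Error GoTo 0
    FileCopy MASTER_FE, localFE

    ' Open the fresh local copy in the user's own installation of Access
    Shell SysCmd(acSysCmdAccessDir) & "MSACCESS.EXE """ & localFE & """", vbNormalFocus
End Sub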
 

Lightwave

Ad astra
Local time
Today, 20:07
Joined
Sep 27, 2004
Messages
1,521
rede, it's not clear from your post, but you might be interested to know that there is a way of automating the compaction of MS Access databases. You have to do it via an external script, because MS Access closes down as part of the compact procedure and any VBA running within the database would stop as part of the compact. Anyway, I repeat it here in case you haven't already figured this out.

Firstly create the following function in your target MS Access Database.

Code:
Public Function CompactDatabase()
    ' Turns on "Compact on Close" when the file has grown beyond the size threshold.
    ' The compact itself happens when Access closes the database (see the script below).
    Dim vStatusBar As Variant

    DoCmd.SetWarnings False

    If FileLen(CurrentDb.Name) > 2000000 Then   ' threshold in bytes - adjust to suit
        Application.SetOption "Auto Compact", True
        Application.SetOption "Show Status Bar", True
        vStatusBar = SysCmd(acSysCmdSetStatus, "The application will be compacted on close. During compaction, please do not interrupt.")
    Else
        Application.SetOption "Auto Compact", False
    End If

End Function

Next, open up Notepad, paste in the following code, save it as a txt file and then, in Explorer, change the suffix to vbs. Then simply double-click on the file within Explorer to run it. You should see the MS Access database you have identified in the VB script open and then immediately close, at which point it carries out the compact. The eagle-eyed will have spotted that the above function doesn't actually perform a compact. All it does is set the Compact on Close option to True within Access settings. The VB Script then instructs the database to close, and Access compacts the database on exit.

Code:
Set oAccess = CreateObject("Access.Application")

' Full path to the database to compact - note the backslash after the drive letter
oAccess.OpenCurrentDatabase "C:\Path\TargetDatabaseyouwantCompacted.mdb"
oAccess.Visible = True
WScript.Sleep 1000                  ' give Access a moment to finish opening
oAccess.Run "CompactDatabase"       ' sets Compact on Close if the file is oversized
oAccess.CloseCurrentDatabase        ' closing triggers the compact

Set oAccess = Nothing

A VB script such as this could be used to remotely run any MS Access function or functions: simply replace the “CompactDatabase” parameter with the name(s) of the function(s) in the database you wish to trigger (or add further Run commands) and then run the VB Script. Be warned that the speed at which functions, and the commands within them, run may vary when called from a VBScript, so you may have to experiment with pauses and waits if you start to get very imaginative with the functions you run.

The VBScript can be run automatically through Windows Scheduler:

Using Windows Scheduler to run Visual Basic Scripts

Ultimately I have found that once you start down the road of collecting streamed data automatically from machines, size becomes an ever-increasing issue. Sizes I hadn't remotely reached in a decade of manual entry I hit in a couple of hours using automation :). I have started experimenting with this, and my current set-up is to do all the calculation locally in MS Access, upload any cleaned data to SQL Server Express, then wipe the data locally in MS Access ready for re-calculation of the next batch. The database size limits for the SQL Server Express editions are listed below. Another option I have used is to store the calculation data in periodic Access databases, moving to a new database for each new time period (e.g. quarterly / bi-monthly / monthly / daily).

Microsoft SQL Server 2005 Express edition has a database size limit of 4GB
Microsoft SQL Server 2008 Express edition has a database size limit of 4GB
Microsoft SQL Server 2008 R2 Express edition has a database size limit of 10GB
Microsoft SQL Server 2012 Express edition has a database size limit of 10GB
Microsoft SQL Server 2014 Express edition has a database size limit of 10GB
Microsoft SQL Server 2016 Express edition has a database size limit of 10GB
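
As a rough illustration of that upload-then-wipe cycle, here's a sketch assuming dbo_CleanedData is a linked SQL Server Express table and tblLocalCalc is the local working table (both placeholder names):

Code:
Public Sub PushBatchToSqlExpress()
    ' A rough sketch of the upload-then-wipe cycle described above.
    ' dbo_CleanedData is assumed to be a linked SQL Server Express table and
    ' tblLocalCalc the local working table - both are placeholder names.
    Dim db As DAO.Database
    Set db = CurrentDb

    ' Push the cleaned batch up to SQL Server Express via the linked table
    db.Execute "INSERT INTO dbo_CleanedData SELECT * FROM tblLocalCalc", dbFailOnError

    ' Clear the local table ready for the next batch (the freed space only comes
    ' back when the Access file is compacted, as discussed earlier in the thread)
    db.Execute "DELETE * FROM tblLocalCalc", dbFailOnError
End Sub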

There are a few gotchas in integrating SQL Express and MS Access that you need to think about at the beginning. Primary keys: MS Access can't handle SQL Server Big Int, so an Access Long Integer maps to an Integer (int) on the SQL Server side. I don't have a copy of MS Access 2016, but I believe Big Int support was added there; note, though, that I believe the Access 2016 runtime strangely DOES NOT support Big Int.

Another catch I came across is that some database drivers struggle to transfer memo fields over a certain number of characters into varchar(max) on the SQL Server Express side. This is a problem if, for example, you are trying to write to a varchar(max) field using a DAO.Recordset. I believe ADO can handle this.
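
As a rough illustration, here's a sketch of writing a long text value through late-bound ADO. The provider, server, database, table and column names are all placeholders to adjust for your own set-up:

Code:
Public Sub WriteLongTextViaADO(ByVal longText As String)
    ' A rough sketch of writing a long text value into a varchar(max) column using
    ' late-bound ADO. Server, database, table and column names are placeholders,
    ' and the table is assumed to have a primary key so the recordset is updatable.
    Dim cn As Object, rs As Object

    Set cn = CreateObject("ADODB.Connection")
    cn.Open "Provider=SQLOLEDB;Data Source=MYSERVER\SQLEXPRESS;" & _
            "Initial Catalog=MyDb;Integrated Security=SSPI;"

    Set rs = CreateObject("ADODB.Recordset")
    ' Open an empty, updatable recordset on the target table
    rs.Open "SELECT * FROM dbo.CleanedData WHERE 1 = 0", cn, 1, 3   ' adOpenKeyset, adLockOptimistic

    rs.AddNew
    rs.Fields("Notes").Value = longText   ' Notes stands in for the varchar(max) column
    rs.Update

    rs.Close
    cn.Close
End Sub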

Manipulating large datasets by dragging them down into MS Access and working on them with VBA can be slow, so experiment on small datasets to get things working and then ratchet up the size. Also think about converting any data manipulation done with recordsets into SQL Server stored procedures. Of course, I am better at VBA than SQL Server, so I am a fair way off having the knowledge to do everything I can do with DAO.Recordsets in stored procedures. So far I am happy if I can get it working with large datasets fully automated and just accept that a calculation may take 7 hours; once working, I focus on optimization. You appear to be following the exact same path, so good luck! When calculations take hours rather than minutes, time becomes valuable, which is why experimentation on small datasets is so key.

These are not insurmountable issues and, for zero cost, what do you expect? I haven't hit the need for anything over 10GB per table (you can use the same trick of splitting tables across databases to extend "database" size in SQL Express, as people suggest with MS Access).
 
