Correct way to deal with temp tables & bloat?

rede96

Registered User.
Local time
Today, 13:39
Joined
Apr 2, 2004
Messages
134
I have a back-end / multiple front-end database set-up. I need to regularly update the BE with data from an external source. The tables in the external source are linked through SQL Server.

Updating directly from the external tables doesn't really work as it is extremely slow and the tables don't have natural primary keys I can link on, so the quickest way I've found is to create temp tables and update from them.

I've created an Auto Update DB that runs overnight. However, the problem is that it bloats in file size very quickly due to the temp tables, so I must compact/repair it each morning or the update DB fails due to file size.

As a workaround I set the update DB to compact/repair on close and was going to use a third DB application to open it each night, let the update process run, then close it so it compacts.

However, I’ve read this isn’t the right way to deal with this issue, as I should be able to do all this without the bloat / temp tables.

Can anyone help with a way to do my updates without the bloat and regular compact/repair? Then I can just set it to update each night and forget about it!
 

Galaxiom

Super Moderator
Staff member
Local time
Tomorrow, 00:39
Joined
Jan 20, 2009
Messages
12,849
Use a "Side End", a separate local database just for the temporary tables. Copy a fresh one each time it is needed and delete it afterwards.

However, I would first look more closely at why your processes with the SQL tables are not performing well.
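
A rough VBA sketch of what that side-end approach could look like (the path and procedure names here are just examples, not from your set-up):

Code:
' Assumed location for the side end - adjust to suit
Const SIDE_END_PATH As String = "C:\Temp\SideEnd.accdb"

Public Sub CreateSideEnd()
    ' Throw away any leftover copy from a previous run
    If Dir(SIDE_END_PATH) <> "" Then Kill SIDE_END_PATH
    ' Create a brand-new, empty database to hold the temp tables
    Dim db As DAO.Database
    Set db = DBEngine.CreateDatabase(SIDE_END_PATH, dbLangGeneral)
    db.Close
End Sub

Public Sub DeleteSideEnd()
    ' Delete the whole file once the update has run - the bloat goes with it
    If Dir(SIDE_END_PATH) <> "" Then Kill SIDE_END_PATH
End Sub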
 

rede96

Registered User.
Local time
Today, 13:39
Joined
Apr 2, 2004
Messages
134
Use a "Side End", a separate local database just for the temporary tables. Copy a fresh one each time it is needed and delete it afterwards.

Thanks for the reply. Sorry, I probably wasn't very clear. The update DB I have is in effect a side end. It is the side end that grows in file size, even with deleting the temp tables. The file size can only be returned to normal via compact and repair.

If I want this side end to run automatically, then I have to have a way for it to compact and repair each night OR find a different way to do the update.

Galaxiom said:
However, I would first look more closely at why your processes with the SQL tables are not performing well.

Not much I can do here. The update is slow because our network is slow, the SQL tables are really large, and I have to create ID links so I can update my tables where none exist in the SQL tables. So I'm kind of stuck with this.
 

CJ_London

Super Moderator
Staff member
Local time
Today, 13:39
Joined
Feb 19, 2013
Messages
16,553
You might want to look at your SQL design - if you are using VBA functions such as Nz, IIf, etc. then it will be slow, because all the data needs to be brought across the network by the ODBC driver so the functions can be applied. It would be better to use a pass-through query and utilise the T-SQL equivalents. Or perhaps the SQL can be redesigned to avoid VBA functions.
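
For example, a pass-through query can be created in VBA along these lines (the DSN, table and field names below are made up for illustration only):

Code:
Public Sub RunPassThrough()
    ' Temporary, unsaved QueryDef that sends the SQL straight to the server
    Dim qdf As DAO.QueryDef
    Set qdf = CurrentDb.CreateQueryDef("")
    qdf.Connect = "ODBC;DSN=MySqlServer;Trusted_Connection=Yes;"   ' assumed DSN
    ' T-SQL equivalents (ISNULL, CASE) run server side, so only the
    ' result set comes back over the network
    qdf.SQL = "SELECT OrderID, ISNULL(Qty, 0) AS Qty " & _
              "FROM dbo.Orders WHERE OrderDate >= '20240101'"
    qdf.ReturnsRecords = True

    Dim rs As DAO.Recordset
    Set rs = qdf.OpenRecordset(dbOpenSnapshot)
    If Not rs.EOF Then rs.MoveLast
    Debug.Print "Rows returned: " & rs.RecordCount
    rs.Close
End Sub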

Similarly, if using aggregate queries, ensure any criteria are applied in the WHERE clause rather than the HAVING clause (HAVING should only apply to aggregated fields; WHERE is applied to the grouped fields).
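
To illustrate the difference (made-up table and field names):

Code:
Public Sub WhereVsHaving()
    ' Slower: every customer is grouped first, then HAVING filters the groups
    Dim sqlSlow As String
    sqlSlow = "SELECT CustomerID, Sum(Amount) AS Total FROM Orders " & _
              "GROUP BY CustomerID " & _
              "HAVING CustomerID = 42 AND Sum(Amount) > 1000"

    ' Better: WHERE trims the rows before grouping;
    ' HAVING is kept only for the aggregated field
    Dim sqlFast As String
    sqlFast = "SELECT CustomerID, Sum(Amount) AS Total FROM Orders " & _
              "WHERE CustomerID = 42 " & _
              "GROUP BY CustomerID HAVING Sum(Amount) > 1000"

    Debug.Print sqlSlow
    Debug.Print sqlFast
End Sub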

Don't think you understood what Galaxiom is suggesting (or perhaps I'm not understanding what you are doing). If I've understood you correctly, you have three 'sets' of tables: the ones in SQL Server, the destination tables in your Access back end, and the temporary tables.

Galaxiom is suggesting creating a temporary database for the temporary tables - your front end can link to them, populate them from SQL Server, then use update/append queries to update the Access back end from the temporary tables. Then delete the temporary database. The only bloat you would then get is if you were deleting data from the destination tables - which in many cases would not be happening.
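
Put together, the nightly routine could look something like this rough sketch (all paths, table and field names are assumptions for illustration, not taken from your database):

Code:
Public Sub NightlyUpdate()
    Const SIDE_END As String = "C:\Temp\SideEnd.accdb"   ' assumed location
    Dim db As DAO.Database

    ' 1. Start from a fresh, empty side end
    If Dir(SIDE_END) <> "" Then Kill SIDE_END
    Set db = DBEngine.CreateDatabase(SIDE_END, dbLangGeneral)
    db.Close

    ' 2. Copy the raw rows from the linked SQL Server table (tblSqlOrders)
    '    into a temp table (tmpOrders) inside the side end
    CurrentDb.Execute "SELECT * INTO tmpOrders IN '" & SIDE_END & "' " & _
                      "FROM tblSqlOrders", dbFailOnError

    ' 3. Update the Access back end from the temp table
    CurrentDb.Execute "UPDATE tblOrders INNER JOIN [" & SIDE_END & "].tmpOrders AS t " & _
                      "ON tblOrders.OrderRef = t.OrderRef " & _
                      "SET tblOrders.Qty = t.Qty", dbFailOnError

    ' 4. Delete the whole side-end file - no compact/repair needed
    Kill SIDE_END
End Sub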
 

Minty

AWF VIP
Local time
Today, 13:39
Joined
Jul 26, 2013
Messages
10,354
The side end database should be created and populated from your application database.

Once the updates are done you delete it completely. All it would hold are your temporary tables, nothing else.
 

MarkK

bit cruncher
Local time
Today, 06:39
Joined
Mar 17, 2004
Messages
8,178
rede96 said:
It is the side end that grows in file size, even with deleting the temp tables. The file size can only be returned to normal via compact and repair.
The side end itself, the entire file, should be deleted, not just the tables it contains.
Mark
 

rede96

Registered User.
Local time
Today, 13:39
Joined
Apr 2, 2004
Messages
134
CJ_London said:
Don't think you understood what Galaxiom is suggesting (or perhaps I'm not understanding what you are doing). If I've understood you correctly, you have three 'sets' of tables: the ones in SQL Server, the destination tables in your Access back end, and the temporary tables.

Galaxiom is suggesting creating a temporary database for the temporary tables - your front end can link to them, populate them from SQL Server, then use update/append queries to update the Access back end from the temporary tables. Then delete the temporary database. The only bloat you would then get is if you were deleting data from the destination tables - which in many cases would not be happening.

Minty said:
The side end database should be created and populated from your application database.

Once the updates are done you delete it completely. All it would hold are your temporary tables, nothing else.

MarkK said:
The side end itself, the entire file, should be deleted, not just the tables it contains.


Right, understood. Sorry, had my mind somewhere else.

Thanks for the help.
 
