How can I improve the speed of an UPDATE query

Sean_F_Howard

I have a (quite) specific question, but I think it covers something I simply cannot answer.

I have three UPDATE queries running on linked tables in Microsoft Access (2000/XP).

My main data table (the one to be updated) has almost 1 million records.

My three information tables ALL have primary keys (which are used to join to the main table) and vary in size.

I have attached the three UPDATE queries plus descriptions of the field names used.

Code:
	Table			Records		Time
	Main DataTable		900000
	Mask nomenk		130		2 hours
	Mask media		900		15 minutes
	Mask brand		4000		?????


[B]Query A[/B]
UPDATE [Main DataTable] AS z
INNER JOIN [Mask nomenk] AS mn ON (z.nomCode1 = mn.nomCode1) AND (z.nomCode2 = mn.nomCode2) AND (z.nomCode3 = mn.nomCode3) AND (z.nomCode4 = mn.nomCode4)
SET z.NomenkMask1 = mn.NomenkMask1;

[B]Query B[/B]
UPDATE [Main DataTable] AS z
INNER JOIN [Mask media] AS mm ON (z.couCode = mm.couCode) AND (z.nomCode1 = mm.nomCode1) AND (z.pubCode = mm.pubCode) 
SET z.MediaMask1 = mm.MediaMask1;

[B]Query C[/B]
UPDATE [Main DataTable] AS z
INNER JOIN [Mask brand] AS mb ON (z.couCode = mb.couCode) AND (z.nomCode1 = mb.nomCode1) AND (z.brCode1 = mb.brCode1) AND (z.brCode2 = mb.brCode2) 
SET z.BrandMask1 = mb.BrandMask1;


[B][U]Fieldname[/U][/B]	[B][U]FieldType[/U][/B]
couCode	Text
pubCode	Text
nomCode1	Long Integer
nomCode2	Long Integer
nomCode3	Long Integer
nomCode4	Long Integer
brCode1		Long Integer
brCode2		Long Integer


My problem, quite simply, is the speed of running these queries. I know that Query B is the quickest, with Query A a distant second (I could not even complete Query C; I killed it after 6 hours).

What I need to know is WHY Query C is so much slower than Query B, when the only real difference I can see between them is one extra field in the join.
 
Simple Software Solutions

A few things spring to mind

One: You may need to build indexes on your brCode1 & brCode2 fields (there is a DDL sketch at the end of this post).

Two: Why are you not using SQL Server instead of Access?

Three: Have you tried to run a compact and repair lately? A word of warning: if the data sits on a server and the mdb is large, drop it to a local PC before running the compact and repair. This could also be the source of the problem - performing action queries on large datasets across a network is fraught with danger, as the caching is badly affected, especially if you try to do it at peak times.

Four:
If it is possible, I would suggest you perform the updates to a temp table and then run an append query to the main table - that is, if the logic allows.
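
To put some flesh on point one, the Jet DDL would be something along these lines - the index names are made up, and each CREATE INDEX has to be run as a separate data-definition query:

Code:
	CREATE INDEX idxBrandJoin ON [Mask brand] (couCode, nomCode1, brCode1, brCode2);
	CREATE INDEX idxMainBrandJoin ON [Main DataTable] (couCode, nomCode1, brCode1, brCode2);

Building an index on a 900,000-row table takes a while in itself, but you only do it once.

And here is a sketch of one way to set up point four, assuming your main table has a single-field primary key - I have called it ID purely for illustration, adjust to whatever yours is. Stage the looked-up values in a temp table keyed on that primary key, index it, then write them back (here as an update over a one-field join rather than an append, since the mask fields already exist on the main table):

[B]Step 1 - make-table query[/B]
SELECT z.ID, mb.BrandMask1 INTO tmpBrandMask
FROM [Main DataTable] AS z
INNER JOIN [Mask brand] AS mb ON (z.couCode = mb.couCode) AND (z.nomCode1 = mb.nomCode1) AND (z.brCode1 = mb.brCode1) AND (z.brCode2 = mb.brCode2);

[B]Step 2 - data-definition query[/B]
CREATE INDEX idxTmpID ON tmpBrandMask (ID);

[B]Step 3 - update query[/B]
UPDATE [Main DataTable] AS z
INNER JOIN tmpBrandMask AS t ON z.ID = t.ID
SET z.BrandMask1 = t.BrandMask1;

Whether step 3 ends up quicker than your original Query C depends on the data, but step 1 will at least tell you quickly whether the four-field join itself is the bottleneck.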
 
Can you clarify why you need to update 1m records in the main table - what are you updating?

If you can identify links between the main table and the lookup table(s), then you don't need to store anything in the main table - just get the info you need with the relevant query whenever you need it. And if the lookup data needs changing, just change it in the lookup table - one write only.

And then, if you restrict the number of rows you are retrieving from the main table (e.g. by date, or nom code), it won't take as long to extract your dataset.
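
For example, something along these lines - the parameter prompt in the WHERE clause is just for illustration, filter by date, nom code or whatever subset you actually need:

Code:
	SELECT z.*, mb.BrandMask1
	FROM [Main DataTable] AS z
	INNER JOIN [Mask brand] AS mb ON (z.couCode = mb.couCode) AND (z.nomCode1 = mb.nomCode1) AND (z.brCode1 = mb.brCode1) AND (z.brCode2 = mb.brCode2)
	WHERE z.nomCode1 = [Enter nomCode1];

The other two mask tables can be joined into the same query in the same way (Jet just wants the extra INNER JOINs nested in parentheses), and nothing ever has to be written back to the main table.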

Or am I missing something?
 
Firstly, the reason I need to update the main table (gemma-the-husky) is that the combined result (main table plus changes) is used in many places throughout my database, and if I have to run the update on the fly every time, it will make all other parts of the database unworkable.

Secondly, I cannot restrict the rows being updated.


Sean
 
DCake,

One: An index already exists using the brCode1 & brCode2 fields

Two: I don't know, you tell me.

Three: Yes, and it made no change (the 1 million record database is already compacted)

Four: Interesting idea, but this would not be possible (or as you say logical)


Fine effort though, chap
 
It's your system, and I take your word for it, but I can't really envisage any system where you have to regularly go back and update old records - surely that's what normalisation is for, so you don't have to do that.

As I say, most things like this are handled by creating a query with the information you need when you need it, with whatever subset of data you require.

That's not long-winded; it's how relational databases should work.
 
Unfortunately it sounds to me, and I'm not trying to be rude or anything, as though the design of this database is severely flawed and needs some redesign. If you are suffering from performance issues and you have to update all records then, as gemma-the-husky said, you should only be working with a subset of data; Jet will let you pull the records required and work on them, without needing all of them.

Unless I'm missing something - which is entirely possible, but unlikely - 1 million records should not need to be accessed every time like this.
 
