Same query takes 1 sec or 5 minutes!

DataMiner

I am truly stumped. I have a query that sometimes takes less than 1 second to run, and sometimes takes close to 5 minutes. I am making NO changes to the query. I first thought it was a network issue, so I moved the whole db, both front-end and back-end, offline. No change. So I thought maybe it was a linked table issue, so I moved all the tables used by the query to the front end. No change. Every once in a while, the query runs at lightning speed, less than 1 second. Most of the time, it takes 3-4 minutes, or longer.

Here's the SQL:
SELECT PCAInternalPareto1a.row, PCAInternalPareto1a.column, PCAInternalPareto1a.Noun,
       PCAInternalPareto1a.Rev, PCAInternalPareto1a.InspPoint, PCAInternalPareto1a.cell,
       PCAInternalPareto1a.DefectType, CLng(nz([firstofTruncatedOpSeq],0)) AS ZOpseq,
       Sum(PCAInternalPareto1a.DefectQty) AS SumOfDefectQty,
       PCAInternalPareto1a.poo, PCAInternalPareto1a.repcode
FROM PCAInternalPareto1a LEFT JOIN BOMOutRefDesOnly
     ON (PCAInternalPareto1a.RefDes = BOMOutRefDesOnly.RefDesOnly)
     AND (PCAInternalPareto1a.partno = BOMOutRefDesOnly.PCAItemNo)
WHERE (((nz([FirstOfComponentItemNo],0)) Like "*"))
GROUP BY PCAInternalPareto1a.row, PCAInternalPareto1a.column, PCAInternalPareto1a.Noun,
         PCAInternalPareto1a.Rev, PCAInternalPareto1a.InspPoint, PCAInternalPareto1a.cell,
         PCAInternalPareto1a.DefectType, CLng(nz([firstofTruncatedOpSeq],0)),
         PCAInternalPareto1a.poo, PCAInternalPareto1a.repcode;

PCAInternalPareto1a is itself a query, which always runs fast. BOMOutRefDesOnly is a table with about 84,000 records. I have indexes on the joined fields.

I know the WHERE clause looks useless, but I have my reasons for doing it this way. I tried deleting the WHERE clause and it ran really fast. Then I re-added it, and it still ran really fast. Then slow again. I can't find any relationship between anything I've tried and the speed of the query! What's going on?
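One thing worth spelling out about that WHERE clause (a general Access SQL point, not specific to my data): Null Like "*" evaluates to Null, so without the Nz() wrapper the unmatched rows from the LEFT JOIN would get filtered out. With Nz() the test is always true. A minimal sketch of the difference:

```sql
-- Null Like "*" is Null, so this would drop the unmatched LEFT JOIN rows:
WHERE [FirstOfComponentItemNo] Like "*"

-- Nz() substitutes 0 for Null first, so this keeps every row:
WHERE nz([FirstOfComponentItemNo],0) Like "*"
```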

Access 2002
 
Speed of the database is often affected by unindexed tables, linked tables, or queries built on other queries, even a mix of tables and queries. Check each related table to make sure that all are indexed. Then make sure that only indexed fields are used in the joins whenever possible.
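If the indexes aren't already in place, they can be added with a data-definition query. A sketch using the field names from the SQL above (the index names are arbitrary, and this is untested against your database):

```sql
-- Index the join fields on the 84,000-record table so Jet can
-- look up matches instead of scanning the whole table:
CREATE INDEX idxRefDesOnly ON BOMOutRefDesOnly (RefDesOnly);
CREATE INDEX idxPCAItemNo ON BOMOutRefDesOnly (PCAItemNo);
```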

Personally, I always build the container tables, then overlay a query that duplicates each table in query form, field for field. Then I use the queries, not the tables, for all other processing. Tables are just containers for my purposes; queries are aliases to the tables. I query a query just like I would a table, and it's more flexible.

This means the table "tbl_Clients" has query "qry_Clients" as an interface. I query "qry_Clients" when I use it in combination with other tables, each represented by its own "qry_TableName". I get my best performance this way.
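As a sketch, the "alias" query is nothing more than a saved SELECT over the table (the field names here are placeholders, of course):

```sql
-- qry_Clients: a field-for-field stand-in for tbl_Clients.
-- (SELECT * also works; listing the fields just makes the
-- interface explicit.)
SELECT ClientID, ClientName, ClientPhone
FROM tbl_Clients;
```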

The other issue may be CPU usage by other programs, including the dastardly "svchost.exe"; for more about it, look it up on the net.

These are just observations from my personal experience; your results may vary.

Happy "Access"ing

Goh
 
It depends on how many cache hits you get when you're querying data. To boost this, sometimes opening the source tables beforehand helps!
 
Diamond.

Can you explain why it is more flexible to run a query off a query the way you explained? I am new and was just curious why you would do this.

Thanks
R~
 
Some examples of why it's more flexible? Hmmm...:
MY BIGGEST REASON:
MY BIGGEST REASON: when using a shared database, the overlaying query collects the data in buffered form from each input station, and it takes only a fraction of a second (depending on bus and CPU speeds) to dump the buffered record into the table once the record is completed. With record-locking security on the database, this allows multiple users SMOOTHER access to the same TABLES simultaneously and avoids the risk of table/data-entry collisions, which can increase exponentially with the number of users and usage volume.

I discovered this the hard way by implementing the "direct to table" approach: I had frequent complaints about an error message indicating that the record couldn't be saved because the table was locked by another user.

The person who got the message then had to escape the record and do it over again. If the user who had the table locked paused in the middle of the input, say to check on the figures or text being entered, or to answer the phone/go to the bathroom/take a break/etc., the length of time that the table was "locked" increased, creating additional data jams.

Overlying queries give you free rein to reformat, rearrange, convert, and/or generally "mess with" the actual data in various ways. You can use part of the data in a formula that makes a unique field for use in a temporary relationship, etc., without having to change the existing data or add another field to the table just to experiment with improvement ideas.
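For example, a computed field built from existing data can serve as a join key without touching the table at all. The table, query, and field names here are made up for illustration:

```sql
-- qry_Parts: adds a synthetic MatchKey without altering tbl_Parts.
-- The calculated field can then be joined to another query's key
-- as a temporary relationship.
SELECT PartNo, RefDes, [PartNo] & "-" & [RefDes] AS MatchKey
FROM tbl_Parts;
```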

In my experience, a database will lead its developer to form conclusions about the structure that make him want to tinker with things in hopes of improving performance or ease of use. Queries give him the flexibility to tinker and change the way the table is represented without changing the table itself.

I hope that is vague enough to clarify my answer ;-)

Goh
 
I think that was a pretty good explanation, Goh. Thanks for taking the time. For the record, SQL Server as a back end *requires* that all access be through queries or "views". It is simply a good habit to have.
 
The cache thing that mousie mentioned makes sense. A query of mine was running slowly yesterday but suddenly it has perked up. I do recall looking at the underlying table for a little while today, so maybe more of the records were in cache. I've been messing around, trying to flush the cache, but I have yet to get the 10 minute wait I was seeing yesterday.

In any case, if a query is running slowly then it is probably analyzing a lot of records (10,000+ for my query). If possible, try not to analyze them all at once. My users only need to see 20 of the 400 results I'm generating, so I add an extra criterion to the WHERE clause. Now my query only generates what they want to see. I'm not using a filter, by the way; filtering only occurs after the query is run.
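In query terms, that just means pushing the criterion into the SQL instead of filtering the opened result. A sketch built from the field names in the original query (the parameter prompt is an assumption, not how the original poster restricts rows):

```sql
-- Only the rows the user asks for are generated; the rest are
-- never analyzed by the aggregation at all.
SELECT DefectType, Sum(DefectQty) AS SumOfDefectQty
FROM PCAInternalPareto1a
WHERE InspPoint = [Enter Inspection Point]
GROUP BY DefectType;
```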
 
Memo to self: Start using query surrogates for tables from now on.

Much obliged GohDiamond :)
 
