Run-time error '3183'

frozbie

Occasional Access Wizard
Local time
Today, 20:06
Joined
Apr 4, 2005
Messages
52
Hi,

I'm developing an Access application that pulls schedule data from an Oracle database and processes the data, filling in gaps in the schedules.

Ultimately the application will process two years' worth of schedule data for tens of thousands of employees. Some employees can have multiple shifts in a day with a schedule that updates weekly, while others have a fixed schedule that has been in place for years.

The end result will be a daily schedule row per employee showing number of hours worked that day to be used in various reports.

To get around the 2 GB Access database limit, I have developed a set of backend databases with a master front end that moves data from pre-processing to processing to processed.

I'm getting the following error when running the application:

Run-time error '3183':
The query cannot be completed. Either the size of the query result is larger than the maximum size of a database (2 GB), or there is not enough temporary storage space on the disk to store the query result.

I had this error initially and further split the backend tables into additional separate databases, but am still getting the error.

I've been Googling this and other forums looking for a solution. I've tried the suggestions in this post:
http://www.access-programmers.co.uk/forums/showthread.php?t=96471&highlight=3183

I can't see any files that would be relevant in either directory:
C:\Windows\Temp\*.*
C:\documents and settings\username\local settings\Temp\*.*

I'm developing in MS Access 2010 on a client laptop running Windows 7 Service Pack 1, with several hundred GB of free disk space, 4 GB RAM, a 64-bit OS, and an i5-4300U CPU @ 1.90GHz.

The front-end database is only 2 MB. Currently the largest backend database is 500 MB. The paging file size is 3,783 MB.

The most recent time I received the error, none of the database sizes were approaching 2 GB. When I clicked Debug, the code broke on:
Set rstTOP = cdb.OpenRecordset("qryProcessTOP", dbOpenDynaset)
I tried running the select query (while in debug mode) and got the same error. I stopped the code and ran the query again; it returned only fifty rows of a single column, with no table joins.

This suggests to me that it is a memory issue or an Access temp file issue.

Is there a way to find out what is taking up space in the Access temp file (and, for that matter, where the Access temp file is and what it's called)?

Or would anyone have any other suggestions?

Thanks

Mark

:banghead:
 

spikepl

Eledittingent Beliped
Local time
Today, 21:06
Joined
Nov 3, 2010
Messages
6,142
Dunno if this is relevant in your case, but I've seen something like this when running UNION queries without ALL. Adding ALL cured the problem.
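Purely as an illustration (the table and field names here are made up), the change is just:

Code:
SELECT EmployeeID, ScheduleDate FROM tblScheduleCurrent
UNION ALL
SELECT EmployeeID, ScheduleDate FROM tblScheduleHistory;

Plain UNION has to de-duplicate the combined result, which is the expensive part; UNION ALL skips that check entirely.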
 

frozbie

Occasional Access Wizard
Local time
Today, 20:06
Joined
Apr 4, 2005
Messages
52
Hi Spikepl,

Thanks for the reply. The query I'm trying to execute when I get the error is a simple select query, but there are union queries in other parts of the process. I'll try modifying them.
 

gemma-the-husky

Super Moderator
Staff member
Local time
Today, 20:06
Joined
Sep 12, 2006
Messages
15,710
I presume it's the first bit of this. The query will be a temporary object of over 2 GB.

Run-time error '3183':
The query cannot be completed. Either the size of the query result is larger than the maximum size of a database (2 GB), or there is not enough temporary storage space on the disk to store the query result.

Why would you need two years of old data to get a single day's/week's results?

Can you not pre-filter the history table at an earlier stage with a query? Put the extract in a temp table if you have to.
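Something like this, purely as a sketch (table and field names invented), run as a make-table query before the heavy processing starts:

Code:
SELECT EmployeeID, ScheduleDate, HoursWorked
INTO tmpScheduleExtract
FROM tblScheduleHistory
WHERE ScheduleDate >= #01/01/2014#;

The rest of the process then works from tmpScheduleExtract instead of the full history table.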
 

spikepl

Eledittingent Beliped
Local time
Today, 21:06
Joined
Nov 3, 2010
Messages
6,142
Your little SELECT couldn't possibly cause all that damage, so something must have accumulated ...
 

frozbie

Occasional Access Wizard
Local time
Today, 20:06
Joined
Apr 4, 2005
Messages
52
Hi Dave,

The client requirement is for a report providing schedule hours for each employee over two years. Some employees have a weekly schedule which, once set, never changes (some going back a decade or more); such an employee may have a maximum of seven rows of data. Other employees have constantly changing schedules with multiple shifts per day; those employees can have hundreds of rows of data for a given month.

If it were just a case of returning data I could use a pass-through query, but I haven't thought of a way to return schedule data for the current year when the only schedule data on record relates to a decade ago.

By bringing the schedule into Access I've been able to loop through a month of days, converting the historic schedule into a current one.

It is slow, though; any suggestions for improvements would be helpful ;)
 

frozbie

Occasional Access Wizard
Local time
Today, 20:06
Joined
Apr 4, 2005
Messages
52
Dave,

Getting off topic now, but I've been refactoring as I've developed and maybe it would be possible to make large parts of the process redundant.

One possibility would be to use some form of Cartesian join: return the schedule start date, day of schedule, and the date of the next schedule that replaces the one being returned. (Lots of schedules overlap, which has meant I need to remove out-of-date schedules.)

If the above data were joined to a query that returned one row for every day, where (day >= schedule start date) and (day < date of the next schedule), that could work...

Will have to try that. Could save a lot of processing time.
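As a rough sketch of what I mean (all names are placeholders, and qrySchedulesWithNext is assumed to return each schedule row together with the start date of the schedule that supersedes it):

Code:
SELECT d.CalendarDay, s.EmployeeID, s.ScheduleDay
FROM tblCalendarDays AS d, qrySchedulesWithNext AS s
WHERE d.CalendarDay >= s.ScheduleStartDate
  AND d.CalendarDay < s.NextScheduleStartDate;

The deliberate Cartesian join between the calendar table and the schedules gets filtered down by the WHERE clause, so each day only picks up the schedule that was in force for that employee on that day.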
 

frozbie

Occasional Access Wizard
Local time
Today, 20:06
Joined
Apr 4, 2005
Messages
52
Hi SpikePL,

Yeah, that's what I'm wondering. I've been careful to close all open recordsets as I finish looping through employees/months, but is it possible that closing them and setting them to Nothing does not free up memory? In that case, since I'm processing large numbers of records, the available memory/temp space (however Access is storing it) would be gradually eroded.
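For reference, the loop and cleanup look roughly like this (query and variable names simplified; the DBEngine.Idle line is an extra I'm thinking of trying on the assumption that it lets Jet flush its caches between batches):

Code:
Dim cdb As DAO.Database
Dim rstEmp As DAO.Recordset

Set cdb = CurrentDb
Set rstEmp = cdb.OpenRecordset("qryEmployeesToProcess", dbOpenSnapshot)

Do Until rstEmp.EOF
    ' ... process one employee/month here ...
    rstEmp.MoveNext
Loop

rstEmp.Close
Set rstEmp = Nothing
DBEngine.Idle dbRefreshCache  ' give Jet a chance to flush work space between batches
Set cdb = Nothing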
 

The_Doc_Man

Immoderate Moderator
Staff member
Local time
Today, 14:06
Joined
Feb 28, 2001
Messages
27,319
I might respectfully disagree with Gemma on which part of that error message is the relevant part. I think it could be either. I could see a case - and can't tell from anything I see in the thread - where the relevant part is "there is not enough temporary storage space on the disk to store the query result." (My logic on that one is at the bottom of my response.) However, I can also see Gemma's thought here and absolutely can agree that it COULD be that the intermediate result is bigger than 2 Gb.

Depending on how you wrote the query, it is possible that you are getting caught on internal temporary query lists. In order to do a select query (at least as I understand the flow of Access), you first build a list of qualifying records. If it happens that you have nested queries, the intermediates can get quite ugly. For example, let's say you are doing a 3-way join. Often, you cannot do a straight-up 3-way but instead must choose to do a 2-way join and then form the final join with the third table and the two-way join query as your two components.

If that happens, then the 2-way join has its own list (that must be built first) and the 3-way join has another list. Depending on how often you use queries as record sources in your main query (due to those pesky JOIN rules), you may have some really huge lists. The solution is going to be to carefully select the intermediate query combinations so that you can apply WHERE clauses to the intermediate query - thus hopefully reducing the size of the intermediate list. If you have the case that I just described, one symptom you might find is that it is going to be necessary to compact and repair your database more often.
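Strictly as an illustration (the table and query names here are invented), by an intermediate query with its own WHERE clause I mean something like:

Code:
SELECT h.EmployeeID, h.ScheduleDate, h.ShiftHours
FROM tblScheduleHistory AS h
WHERE h.ScheduleDate >= #01/01/2014#;

Save that as, say, qryFilteredHistory and have the bigger joins use qryFilteredHistory rather than the raw history table, so the intermediate list Access builds is already trimmed down.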

There is another question that I cannot answer because here, I'm a bit out of my depth. Perhaps another forum member can answer for you. (And if so, we will BOTH learn something.) When you open database A and then use queries to open databases B, C, D - or even if you just link the tables to A - and you execute a query where the extra databases are involved by linkage rather than by queries with explicit "IN file" clauses, where do the temporary tables go that I described earlier?

That is, if I have to build that intermediate list, does it always go into database A or can it ever spill over into one of the others? (I believe the answer is "NO" but can't prove it.) If the intermediate files all go to A then eventually you will fill up A with those intermediates. That is why I suggested trimming the intermediates with WHERE clauses to keep that list smaller.

It might even be necessary to have A be like a front-end file with NO local data of its own so that you maximize the free address space for your intermediate tables, but I can't guarantee that it would be enough unless you can also do things to reduce the preliminary intermediate list size.

The case where the memory on your system becomes relevant (my contention that differs from Gemma's viewpoint) is that your page file size is 3.7 Gb and you are looking at multiple 2 GB databases. When you open the database in a way that would cause it to be scanned for a query, you run into the COMMIT/CHARGE issue. Even though that chunk of the database is in a file, I think you have to commit paging space for it because Access has to bring it into memory to work on it. Based on your discussion of your structure, each such database is 2 Gb of data space.

If you have a 2 GB main database and 2 more auxiliary DBs of 2 GB each, that's 6 GB. Add the space required for the Access program itself (which is external to the DB), add in the Windows permanent stuff, and then add the virtual DATA spaces implied by opening large DBs and suddenly you realize that you have just topped out your page file by a bunch.

You can test whether I am right by running Task Manager in a small window where you can see the stats for physical memory, kernel memory, and system memory. The Commit number under system memory is going to be the one to watch. Open your database as other than full-screen so that you can watch the stats on Task Manager. If system memory "COMMIT" tops out at 3.7 GB then your problem is SYSTEM virtual data space, not necessarily ACCESS virtual space.

You can try to expand your page file though you might have to defrag your disk in order to get enough contiguous space. (And there is no guarantee that it is even possible depending on the size and "clutter factor" on that disk.)

I would also point out that if you have a really complex data set that comes from a flat-file (EXCEL, e.g.) origin, you might gain a LOT of ground by looking at whether your datasets can be normalized better to reduce their size in some way. This is a case where a little goes a long way because of the large number of records that would be affected by normalization.
 

frozbie

Occasional Access Wizard
Local time
Today, 20:06
Joined
Apr 4, 2005
Messages
52
Hi Doc Man,

That is a useful reply, thank you.

While none of the backend databases approaches 2 GB in size while running the process, together they hold more than 1 GB of data. I would like to know, and be able to test, whether the query result sets are increasing the front-end or back-end temp file size.

The paging file limit is certainly a possibility. I will check that out as well.

Mark
 

The_Doc_Man

Immoderate Moderator
Staff member
Local time
Today, 14:06
Joined
Feb 28, 2001
Messages
27,319
To test the front-end file, be sure you have the Task Manager "Processes" list up and see whether it is the amount of memory associated with the MSACCESS task (under memory usage) that is growing, or the System Memory COMMIT space. It is convenient that both task memory and system memory are visible on the same tab of the Task Manager. MSACCESS is the actual name of the .EXE file and is what you will see in the processes list.

As to the total size of the static databases, that is immaterial to a very large degree. What matters is that the compacted size of a database is the SMALLEST amount of memory that must be committed when you open the file. I can be wrong on this but I don't think I am... Windows will not bring something into memory until it knows ahead of time (by committing page file space) that it could do a page-in or full-blown in-swap. I also believe that if you start growing one of those intermediate tables, your performance will radically suck for the duration of the time that the page-file space is re-allocated. And if it should happen that you have to shuffle the page file to gain contiguous space, you will be looking at one of the great wonders of the world - the one that says "I wonder if I'll ever hear from this system again."
 

gemma-the-husky

Super Moderator
Staff member
Local time
Today, 20:06
Joined
Sep 12, 2006
Messages
15,710
I have no issue with the DocMan suggesting that it might be a temporary storage problem.

As an alternative, can you run this for a single employee, rather than all employees?

In which case maybe you can do this

1. have a select query of the employees in which you are interested.
2. iterate this record set.
3. on each iteration write the output results to an output table
4. use the output table for final reporting.

This would hopefully avoid the need to produce mega-large record sets.
You also have an interruptible process, so you can add a useful progress monitor to step 2.
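As a very rough sketch of steps 1 to 4 (all query, table and field names invented):

Code:
Dim db As DAO.Database
Dim rst As DAO.Recordset

Set db = CurrentDb
Set rst = db.OpenRecordset("qryEmployeesOfInterest", dbOpenSnapshot)

Do Until rst.EOF
    ' build the daily rows for just this employee and append them to the output table
    db.Execute "INSERT INTO tblDailySchedule (EmployeeID, WorkDate, HoursWorked) " & _
               "SELECT EmployeeID, WorkDate, HoursWorked " & _
               "FROM qryBuildDailySchedule " & _
               "WHERE EmployeeID = " & Chr(34) & rst!EmployeeID & Chr(34), dbFailOnError
    ' a progress indicator could be updated here
    rst.MoveNext
Loop

rst.Close
Set rst = Nothing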
 

frozbie

Occasional Access Wizard
Local time
Today, 20:06
Joined
Apr 4, 2005
Messages
52
SpikePL,
Turns out I only had two union queries in the whole application. One wasn't relevant to the process causing the error, as it runs at an earlier stage. I changed the other to UNION ALL. Not sure what effect this will have...

My understanding of UNION ALL is that it returns ALL records, as per https://msdn.microsoft.com/en-us/library/bb208962(v=office.12).aspx: "By default, no duplicate records are returned when you use a UNION operation; however, you can include the ALL predicate to ensure that all records are returned. This also makes the query run faster."

While it is helpful to have the query run faster, will returning duplicates possibly use less virtual/temp memory than excluding them?

Doc Man, I ran into another problem trying to reset the data to retest. I've fixed it now, but am wondering if the cause of that problem was related to, or causing, the main out-of-memory issue.

In order to cut down the volume of data, I've taken several passes at cleansing the data before it is processed, removing all irrelevant schedules and removing duplicate/crossover schedules. (The system has some employees with a schedule that runs for a year but is updated weekly, so there are a large number of out-of-date records.)

My query to remove these duplicates joined on two subqueries and was taking a couple of minutes to run each loop (and I was looping through batches of 50/100 employees trying to find the most efficient way to process them). Then, when retrying the test, the query generated an error, the only difference being that the unneeded records had already been deleted.

All of that is probably TMI without enough detail...

Anyway, I've refactored the delete crossover schedules query/process and that now runs for all employees/schedules at the start of the process and is much faster.

I've recombined the three main backend databases into one to try and avoid three backends being opened at once.

Retesting, I got three times as far into the process before getting the same error as originally. This time the error was on line:
Code:
DoCmd.RunSQL "DELETE * FROM PreProcess WHERE EMPLOYEEID <=" & Chr(34) & strEmpID & Chr(34)
I tried modifying the line while in debug to
Code:
cdb.Execute "DELETE * FROM PreProcess WHERE EMPLOYEEID <=" & Chr(34) & strEmpID & Chr(34), dbFailOnError
but this threw up a slightly different error: Run-time error '3035'
System resource exceeded

Searching on this error, I found several forums recommending upping the page-locks value, so I have added this:

Code:
 DAO.DBEngine.SetOption dbMaxLocksPerFile, 1000000

With reference to the attached KillChrome.png: checking Task Manager while running the process and looking at file sizes, I couldn't see any change in System Commit while the process was running, other than when I shut down Chrome, which caused the dramatic drop in the history graph.

Free physical memory did approach zero at times, but Cached and Available always showed similar values. Commit never went above 4000.

The file sizes of the backend database and front end together did approach 2 GB, and I've added a check so that when the backend DB size goes above 1.3 GB it is compacted/repaired.
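The check is along these lines (the path and threshold are just placeholders; the backend has to be closed, i.e. no .laccdb lock file present, before it can be compacted):

Code:
Const BACKEND_PATH As String = "C:\Data\ScheduleBackend.accdb"  ' placeholder path
Const SIZE_LIMIT As Long = 1395864371                           ' roughly 1.3 GB in bytes

If FileLen(BACKEND_PATH) > SIZE_LIMIT Then
    DBEngine.CompactDatabase BACKEND_PATH, BACKEND_PATH & ".tmp"
    Kill BACKEND_PATH
    Name BACKEND_PATH & ".tmp" As BACKEND_PATH
End If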

Am going to give the process another run this evening and will feed back.
 

Attachments

  • KillChrome.png (74.8 KB)

frozbie

Occasional Access Wizard
Local time
Today, 20:06
Joined
Apr 4, 2005
Messages
52
Hi Dave,
That step suggestion is very close to the process I'm following. I'm indeed only looking at one employee at a time, writing their schedules to a separate table and then closing it. That final table/DB is the only one that is not getting bloated.
 

gemma-the-husky

Super Moderator
Staff member
Local time
Today, 20:06
Joined
Sep 12, 2006
Messages
15,710
One other thing: maybe somewhere within your queries you have a Cartesian/cross-product join where you haven't joined the keys correctly.

If you have 1,000 rows in each of two tables and don't join them correctly, you may get a cross product of 1 million rows. This could well exceed system resources, and it may get filtered out by subsequent actions, so you don't realise it is happening.

Can you time the queries and see if one particular step is taking a long while to complete? That might be a good point to investigate.
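Even something crude in the Immediate window would do; for example (the query names here are only placeholders for your own steps):

Code:
Dim sngStart As Single
Dim rst As DAO.Recordset

' time an action query
sngStart = Timer
CurrentDb.Execute "qryDeleteCrossoverSchedules", dbFailOnError
Debug.Print "qryDeleteCrossoverSchedules: " & Format(Timer - sngStart, "0.00") & " s"

' time a select query - MoveLast forces the whole result set to be fetched
sngStart = Timer
Set rst = CurrentDb.OpenRecordset("qryProcessTOP", dbOpenSnapshot)
If Not rst.EOF Then rst.MoveLast
Debug.Print "qryProcessTOP: " & Format(Timer - sngStart, "0.00") & " s"
rst.Close
Set rst = Nothing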
 

The_Doc_Man

Immoderate Moderator
Staff member
Local time
Today, 14:06
Joined
Feb 28, 2001
Messages
27,319
The "Out of resources" item is a bit awkward to fix because you have to modify some things in the system definition files. In simplest terms, a Windows Resource is a read-only item used by programs. There is a table of slots for pointing to resources. It has a finite capacity. The resources themselves can be in individual files, perhaps even embedded in those files, but they are registered in the resource table. If you fill up that table, you are out of resource slots. I'm not a fan of the error message Microsoft used, because you can never actually run out of resources until all your disks are full. But you can run out of the slots in the table that registers the resources used by a given program.

My thinking is that if you collapsed three files into one and got three times farther in the process, you are running out of Access address space, particularly if the COMMIT didn't change much but the individual task memory DID change to approach 2 Gb.

Dave's comments regarding Cartesian JOINs are also spot-on as a trouble source. The other question I have is whether your WHERE clause involves a primary key or not. The amount of space needed to store an intermediate query list is smaller if you are using the PK rather than a non-key field, because in the former case the data can be taken from the index. That index is inherently smaller than the full records needed in the non-key case.
 

frozbie

Occasional Access Wizard
Local time
Today, 20:06
Joined
Apr 4, 2005
Messages
52
Update as of Wednesday AM. I've further refactored the queries used in the process to minimise the use of subqueries, and replaced the final code that writes the schedule (recordset .AddNew/.Update) with a db.Execute of a dynamically built append/INSERT ... VALUES SQL string.
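The pattern now looks roughly like this (field names and values are simplified placeholders, not the actual code):

Code:
Dim db As DAO.Database
Dim strSQL As String
Dim strEmpID As String, dtmDay As Date, dblHours As Double

Set db = CurrentDb
strEmpID = "12345": dtmDay = Date: dblHours = 7.5   ' placeholder values

strSQL = "INSERT INTO tblDailySchedule (EmployeeID, WorkDate, HoursWorked) " & _
         "VALUES (" & Chr(34) & strEmpID & Chr(34) & ", #" & _
         Format(dtmDay, "mm\/dd\/yyyy") & "#, " & dblHours & ")"
db.Execute strSQL, dbFailOnError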

I set this off yesterday evening. I'm getting a different error now... Run-time error 3073: Operation must use an updateable query. I think this is because the code is now running fast enough that the table is not released before the next append runs. I can handle that by error trapping or by checking for the presence of a lock file (or maybe by bringing that final table into the main backend database as well).

Anyway, I'm yet again further on in the process without the original error. Interestingly, the changes I've made have resulted in the backend database remaining at a consistent size while the front end has ballooned to 2,097,152 KB. I had expected the process to crash when the front end reached the 2 GB limit, but it is still going.

My understanding was that Access is quite poor at managing its file space: updates, deletes, etc. do move/delete the data but don't free up the file space. However, I seem to be observing a situation where the absolute size limit of an Access file has been reached and the updates/deletes etc. are still running but can no longer increase the file size.

It's ugly, and I want to take the process to a point where the front end retains its minimal size, but if the process completes that is a major step forward.

Although I don't think there's much likelihood of the queries resulting in Cartesian joins, I will check that out and look at indexes/primary keys. Interestingly, I removed indexes from some tables earlier as they seemed to be slowing the process down...
 

spikepl

Eledittingent Beliped
Local time
Today, 21:06
Joined
Nov 3, 2010
Messages
6,142
While it is helpful to have the query run faster, will returning duplicates possibly use less virtual/temp memory than excluding them?

Imagine you have a bunch of records, not indexed, and UNION checks that there are no duplicates. This entails, conceptually, checking each record against every other record! With ALL added, this entire process is skipped. I have had a crash on this (System resources something-or-other) which ALL fixed.
 

gemma-the-husky

Super Moderator
Staff member
Local time
Today, 20:06
Joined
Sep 12, 2006
Messages
15,710

I think if you are getting to 2 GB it will probably stop working; who knows what errors you will get at that point. I tried to merge some databases as a project, which caused the database to reach 2 GB. At that point the DBs just became unusable - tables could not be extracted or imported.

It depends how big your Oracle dataset is, but I expect the underlying problem is more to do with the Access manipulations.

Just to clarify:

Do you import the Oracle tables into Access? At that point, how big is the Access database, before you try to process it at all?
 

The_Doc_Man

Immoderate Moderator
Staff member
Local time
Today, 14:06
Joined
Feb 28, 2001
Messages
27,319
Run-time error 3073 Operation must use an updateable query

This is not due to a table not being released. It is probably due to the presence of an aggregate in the query. Simple case: if I have a query that contains a SUM(x) field and a GROUP BY y clause, the query cannot be updated, because if you try to update a record, which one will you choose? Suppose, for S&G, that you have 10 records sharing the same value of y being grouped. The UPDATE wants to do only one record update, and this situation imposes a 10-way ambiguity. You would need to update the underlying table or query with a WHERE <field> = "Y" to catch all 10 records.

There are other reasons that can cause this, including queries based on a one-to-many JOIN where you didn't select a field from the MANY side to help choose which unique record to update.
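Purely as an illustration of the aggregate case (names invented) - a totals query like this is read-only:

Code:
SELECT EmployeeID, Sum(HoursWorked) AS TotalHours
FROM tblDailySchedule
GROUP BY EmployeeID;

so to change the rows behind it you have to go at the base table with a WHERE clause instead:

Code:
UPDATE tblDailySchedule
SET HoursWorked = 0
WHERE EmployeeID = "12345";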
 
