Getting Run-time error '3625' (The text file specification 'x' does not exist...) - but it does.

Papa_Bear1
I've been running some import code successfully for a long while - like hours.
This import step/process is being repeated many hundreds of times. (I'm looping through hundreds of files in a folder, pulling data in from them basically.)

Suddenly, I see this error 3625, with a misleading message about how the import specification does not exist.
It does exist. I can even manually import using it just fine.
Upon closer inspection with a break point, I'm finding that after restarting, it will successfully import twice, and then on the third attempt it throws this error.

Again, the Import Specification is there and works. (I even double-checked the system table and it is there.)
I've also simply tried creating a new/separate Import Specification, but it is doing the same thing.

Any advice on why this might be happening - or has anyone else seen this and identified a fix?
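
(For context, roughly the kind of loop involved - the actual code wasn't posted, so the spec, table, and folder names below are placeholders:)

Dim strFile As String
strFile = Dir("C:\MetadataFiles\*.txt")
Do While Len(strFile) > 0
    ' "MyImportSpec" / "tblImport" are hypothetical names;
    ' acImportDelim would be used for a delimited spec
    DoCmd.TransferText acImportFixed, "MyImportSpec", "tblImport", _
        "C:\MetadataFiles\" & strFile, False
    strFile = Dir()   ' next file - error 3625 appears partway through the loop
Loop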
 
Maybe there's an error in your syntax that is making the text file spec name look wrong.
 
This reeks of being a memory leak. You say you are running the same procedure over and over. Well, a tiny leak eventually becomes a problem. Start clean. Shut down and reboot.
 
Since you have an import spec, I presume you are using TransferText or similar.

Might be worth considering using SQL instead? No need for an import spec.
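
Something along these lines, for example - the table and file names are illustrative, and a fixed-width file would also need a schema.ini in the source folder:

' Pull a text file straight in with SQL - no import spec required.
' Note the "#" replacing the "." in the file name; this syntax requires it.
CurrentDb.Execute _
    "INSERT INTO tblImport " & _
    "SELECT * FROM [Text;HDR=Yes;DATABASE=C:\MetadataFiles].[metadata001#csv]", _
    dbFailOnError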
 
Access and other Office programs are notorious for having issues with accumulating "junk" in memory. The questions you are getting all point in the same direction: we need to know by what method you get data from the hundreds of files into your Access DB. The problem is to find out which resources you are using. I have to admit I would have expected a different error message, though.

It might be instructive if you ran Windows Task Manager continuously from before you start the task until the time of the first failure. I don't have a Win10 machine available, but on Win11, if you put the WTM on the Performance tab and select the CPU option, one of the read-outs is the number of threads. Another is the number of handles.

It might be instructive to watch threads and handles as this application continues to process files. It is entirely possible that you are REALLY running out of some resource, but for some reason you get that "file does not exist" message, perhaps when the file handles start to really spike. I would think the number of threads should more or less stay the same but the number of handles might not. If that happens, you might be running into - as mentioned - a memory leak. Another place where before/after monitoring might reveal a leak would be on the Memory page, where you would be looking at paged and non-paged pool, plus the committed and in-use memory amounts. Ideal behavior would be that you get X usage for the various parameters before you start the app. Then you run until your problem shows up. Take readings again. Then exit the app and see that the memory usage and the file handles go back to their values pre-usage.

If we know how you do the import, we might be able to give you hints about how to minimize the impact of what you do by judicious closure of various file objects. Like Pat, I think this smells like resource mismanagement. Don't know if I would go out on a limb as a memory leak vs., say, failure to deallocate a resource when done with it. But the EFFECT of either situation will LOOK like a memory leak issue. If Pat is right, your only recourse will be to limit the number of times you do your import before you exit and restart the app. If I'm right, simply remembering to explicitly close something before you re-open it might be all that is needed. But we need clarification before knowing what's up here.
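
(By "judicious closure" I mean the usual pattern of explicitly releasing whatever gets opened on each pass - a minimal sketch, assuming DAO recordsets are in play:)

Dim rs As DAO.Recordset
Set rs = CurrentDb.OpenRecordset("tblImport", dbOpenDynaset)
' ... work with the data ...
rs.Close                        ' releases the underlying handles and locks
Set rs = Nothing                ' releases the object reference itself
DBEngine.Idle dbRefreshCache    ' optionally let the engine catch up between files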
 
Remember that a saved set of "import steps" is not the same as a saved "Import/Export specification".
 
OK - Thanks all for the info!
I will try a reboot tomorrow. And I will try some alternate methods as well.

I don't think it can be a syntax issue since the exact same syntax worked - and then didn't work. As far as the distinction between import specs and the so-called "saved" thing - I've never bothered with that new save method. (I can't think of a scenario where it would help me.) I only ever establish Import Specs and then refer to those from code.

The idea of it being a memory-related issue does indeed feel right - since it did work for a while - and then - not!

Originally, I tried to use a method I've used before - which is to establish a linked table (to a given .csv file), and then simply change the connect string and refresh the link. But no - for some magical reason, it worked before (when the import spec was delimited) but not now (when the import spec is fixed-width). I was going to ask for help on that problem (the connect string was REQUIRING truncated folder names in the DATABASE= path - I can't get my head around that one...), but I saw that one person recommended switching to simply deleting the linked table altogether and re-attaching to the next file. (The more I ponder this, the more likely it seems that the repeated connect/delete is causing some kind of memory-related problem and is what upset the apple cart.)
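
(For anyone following along, the delete-and-re-attach approach looks roughly like this - table, spec, and path names are placeholders, and acLinkDelim would replace acLinkFixed for a delimited spec:)

On Error Resume Next
DoCmd.DeleteObject acTable, "tblLinkedCSV"   ' drop the previous link if it exists
On Error GoTo 0
DoCmd.TransferText acLinkFixed, "MyImportSpec", "tblLinkedCSV", _
    "C:\MetadataFiles\" & strFile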

I was planning on trying the Query method - where the connect string is buried inside the query, and then just change THAT connect string every time... but I suspect that this will not work for the same reasons that trying to change the linked table connect string didn't work. But we will see. I will give it a try anyway.
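
(The query method I have in mind is roughly this - the query, table, and variable names are placeholders:)

Dim qdf As DAO.QueryDef
Set qdf = CurrentDb.QueryDefs("qryImportText")   ' an existing saved query
qdf.SQL = "INSERT INTO tblImport SELECT * FROM " & _
          "[Text;HDR=Yes;DATABASE=" & strFolder & "].[" & _
          Replace(strFile, ".", "#") & "]"
qdf.Execute dbFailOnError
Set qdf = Nothing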

I'm basically using GDAL to generate metadata files for hundreds of files (in this particular case, over 2300 of them), and then parsing those outputs to ingest, store, and use a subset of that metadata to process the files themselves (like #rows, #columns, NoData value - that kind of thing.)

I really appreciate the great responses on this forum --- Here's hoping I can find a way to reliably process these files.
Thanks again!!
 

Considering that Access connects things by name... if you modify the connect string (i.e., change the name), you implicitly delete and re-connect anyway. Windows does that because it has to keep the file locks properly synchronized with the file being locked at the moment. Either method of changing files is equally likely to lead to memory issues.

As a possible way to raise whatever limit you are encountering - if it IS a limit - look at this thread, which diddles with the "max locks per file" number as a way to avoid hitting file limits quite so quickly.
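
(For reference, the session-only version of that tweak is a one-liner; 200,000 is just an example value:)

' Raise the lock limit for the current session only - no registry edit needed
DBEngine.SetOption dbMaxLocksPerFile, 200000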


I wasn't familiar with GDAL so I appealed to the great Google brain. Are those some type of graphics files?
 
Thanks for that idea too. There have only been a couple of times in the past where I ended up changing that MaxLocksPerFile setting. I don't even remember why.

Yes, the GDAL I'm referring to is indeed for processing graphics files (GeoTIFF files - in this case, with topographic/terrain data). Among other things, you can run a command to generate an XYZ file, which then allows you to easily pull the lat/lon/elevation values.
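
In case it helps anyone searching later, a rough sketch of driving that from VBA - the paths are placeholders and gdal_translate must be on the PATH:

Dim wsh As Object
Set wsh = CreateObject("WScript.Shell")
' -of XYZ writes one "x y z" line per pixel; the final True makes VBA wait
wsh.Run "cmd /c gdal_translate -of XYZ ""C:\Tiles\t001.tif"" ""C:\Meta\t001.xyz""", 0, True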
 
Just wanted to follow up on this a bit.

Upon rebooting, and adding some code to clear recordsets everywhere possible, it did successfully finish - although it only had roughly 550 files to go. When the problem first occurred, I think it had reached somewhere around 1800 files processed. So, I'm not sure I changed anything to actually fix the problem. I'll know when I summon the courage to try a rerun on that big set of files.

Again - all the ideas and help --- very much appreciated!
 
Break it down into smaller batches?
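
Something like this, perhaps - log each finished file to a table so a rerun can skip it (the table, field, and helper names are hypothetical):

Const BATCH_SIZE As Long = 500
Dim n As Long, strFile As String
strFile = Dir(strFolder & "*.txt")
Do While Len(strFile) > 0
    If DCount("*", "tblDone", "FileName = '" & strFile & "'") = 0 Then
        ImportOneFile strFolder & strFile   ' hypothetical wrapper around the import
        CurrentDb.Execute "INSERT INTO tblDone (FileName) VALUES ('" & strFile & "')", dbFailOnError
        n = n + 1
        If n >= BATCH_SIZE Then Exit Do     ' stop here; restart Access and rerun
    End If
    strFile = Dir()
Loop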
 
I've never bothered with that new save method. (I can't think of a scenario where it would help me.) I only ever establish Import Specs and then refer to those from code.
I'm with you. Isladog has written extensively on what you can do with code to overcome the inability to change the spec in the "new" method. I tend not to change my methods unless the new option actually makes my process easier or provides additional features that I want to use. This was one of those change-for-the-sake-of-change updates to hide a little complexity from the new user.

Just take the leap and run the job that failed. I like Doc's idea of keeping Task Manager open so you can monitor the resources. But if you had open files or other objects that you have now closed, that may have fixed the resource issue. I said memory leak, but Doc is right: it could be any kind of resource limit, although the error is a little strange. But we cannot always tell how a hard limit manifests itself.
 
And to amplify Pat's comment, we ALSO cannot always tell from the error message what actually failed, since sometimes file-system things are buried in layers of subroutines and the error that gets signaled might depend on exactly which subroutine caught the problem. That's what happens in a layered model of error handling.
 
