To the question of how big MaxLocksPerFile can get: it is a registry DWORD, so in theory it could become quite large, maybe as high as a billion. But that runs into the VERY significant issue that your locks might cause you to exceed the virtual memory limit either for your app or for the system itself (and I'm thinking the latter, because file-sharing locks are typically held by SYSTEM and not the user). A file lock is a data structure in Windows associated with the file handle, and it has to reside in memory until you close whatever holds the lock. So allowing locks to accumulate eats memory and risks crashing the system, or locking it up so badly that it needs a three-finger salute.
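For what it's worth, if you do need a bigger limit for one batch operation, DAO exposes a per-session override so you never have to touch the registry value at all. A minimal sketch (the 200,000 figure is purely illustrative, not a recommendation):

```vba
' Raise the lock ceiling for THIS session only; the registry value
' (HKLM\...\Jet 4.0\MaxLocksPerFile, default 9500) is left untouched,
' and the override evaporates when the application closes.
DBEngine.SetOption dbMaxLocksPerFile, 200000

' ... run the lock-hungry batch operation here ...
```

The nice thing about SetOption is that the risk window is limited to the one procedure that needs it, rather than every session on the machine.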
Based on the online solutions I'm seeing, the better solution will be to find another way to do this - but in reviewing the original thread, I see that this was my own idea. What can I say? Some of my ideas are better than others. It happens!
Seriously, perhaps there is another way within the recordset. Is this something that could be done with recordsets opened using dbOptimistic? I don't know (and so far haven't found online) an explanation of dbOptimisticBatch, but I'm pretty sure you want to do whatever you can to avoid locking the record any longer than needed.
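The locking style is chosen when the recordset is opened, via the LockEdit argument of OpenRecordset. A minimal sketch, assuming a hypothetical local table named tblOrders:

```vba
Dim db As DAO.Database
Dim rs As DAO.Recordset

Set db = CurrentDb
' dbOptimistic: the page/record is locked only for the instant of the
' Update call, not for the entire Edit..Update window as with dbPessimistic.
Set rs = db.OpenRecordset("tblOrders", dbOpenDynaset, 0, dbOptimistic)

rs.Edit                ' no lock is taken here under optimistic locking
rs!Status = "Done"
rs.Update              ' lock acquired and released right here
rs.Close
```

That keeps each lock's lifetime as short as the engine allows, which is exactly the "don't hold it longer than needed" goal.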
Another thought comes to mind. If this is a dedicated query, perhaps you don't care if you lock it at all via SQL (because you would be managing locks in the recordset). Can you perhaps set the query to "no locks" and trust that the recordset's locks will be enough?
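If the query lives in the front end, that setting can be flipped in code as well as in the query designer. A hedged sketch, assuming a hypothetical saved query named qryBatch (RecordLocks is an Access-defined property, so it only exists on the QueryDef once it has been set at least once in the designer):

```vba
' RecordLocks values mirror the query property sheet:
' 0 = No Locks, 1 = All Records, 2 = Edited Record
CurrentDb.QueryDefs("qryBatch").Properties("RecordLocks") = 0
```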
In this thread I saw a comment about using what is essentially an On Error block that does a Resume in certain cases, but that approach makes everything about five times slower. This is because error handling triggers a trap, which forces a context switch so that your interpreted trap-handler code can test and dismiss errors. That context switch ain't cheap: it involves stack manipulation AND a CPU mode change from USER to KERNEL and back again (mostly to manage the stack), then at least briefly another USER/KERNEL swap when you do the Resume, which collapses the hardware stack to reset your context to the code you were running when the error trap fired.
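The retry pattern being described looks something like this sketch (tblOrders and the Processed field are hypothetical; 3218 and 3260 are Jet's usual "record is locked" error numbers). Every trip through the handler pays the trap-plus-Resume overhead described above, which is where the slowdown comes from:

```vba
Sub ProcessOrders()
    On Error GoTo LockTrap

    Dim rs As DAO.Recordset
    Set rs = CurrentDb.OpenRecordset("tblOrders", dbOpenDynaset)

    Do While Not rs.EOF
        rs.Edit
        rs!Processed = True
        rs.Update
        rs.MoveNext
    Loop
    rs.Close
    Exit Sub

LockTrap:
    If Err.Number = 3218 Or Err.Number = 3260 Then
        DoEvents        ' give the other user a moment to release the lock
        Resume          ' re-execute the exact statement that failed
    Else
        Err.Raise Err.Number    ' anything else is a genuine error
    End If
End Sub
```

Note the unconditional Resume means a lock that never clears becomes an infinite loop; a real implementation would want a retry counter before giving up.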