Getting file sharing lock count exceeded error

sxschech

Registered User.
Local time
Today, 10:31
Joined
Mar 2, 2010
Messages
792
Another thing that can cause the problem is certain virus checkers. We had several computers, all bought at the same time and all loaded with the same software, yet for some reason mine did not get the errors and the others did. Eventually we discovered the only difference was that my computer never had the full version of Norton installed and the others did. When they turned off Norton, things operated smoothly.
 

speakers_86

Registered User.
Local time
Today, 13:31
Joined
May 17, 2007
Messages
1,919
I think I missed something. I thought you all determined this was being caused by the records being released asynchronously. Is that no longer the verdict? I was just about to add a timer to allow the records to get released, but now I'm not sure.
 

sneuberg

AWF VIP
Local time
Today, 10:31
Joined
Oct 17, 2014
Messages
3,506
speakers_86 said:
I think I missed something. I thought you all determined this was being caused by the records being released asynchronously. Is that no longer the verdict? I was just about to add a timer to allow the records to get released, but now I'm not sure.
That's still the verdict as I see it, but why slow things down with a timer or an error handler when moke123's solution, or

Code:
Dim x As Long
rs.MoveLast                 ' populate the recordset fully so RecordCount is accurate
x = rs.RecordCount
rs.MoveFirst
DAO.DBEngine.SetOption dbMaxLocksPerFile, x   ' session-only; does not touch the registry

seems to work fine, though I had to pad x (I added 10) to get it to work in my last test database.
 

sneuberg

AWF VIP
Local time
Today, 10:31
Joined
Oct 17, 2014
Messages
3,506
It seems to work for arnelgp, but I can't get that solution to work for me on either of my computers. The following code does just that and nothing else to avoid or fix the problem, and it is in the attached database. Would you run the test and see if it works for you? Let us know the results and which versions of Windows and Access you have.


Code:
Public Sub UpdateTable1()

    Dim rs As DAO.Recordset
    Dim lngRecCount As Long

    Set rs = CurrentDb.OpenRecordset("Query1")

    Do While Not rs.EOF
        rs.Edit
        rs!F1 = 5
        rs.Update
        rs.MoveNext
        lngRecCount = lngRecCount + 1
        ' manually release locks every 5,000 records
        If lngRecCount Mod 5000 = 0 Then
            DBEngine.Idle dbFreeLocks
        End If
    Loop

    rs.Close
    Set rs = Nothing

End Sub
 

Attachments

  • dbFreeLocksTest.zip
    1.2 MB

The_Doc_Man

Immoderate Moderator
Staff member
Local time
Today, 12:31
Joined
Feb 28, 2001
Messages
27,163
To the question of how big MaxLocksPerFile can get: it is stored as a DWORD, so in principle it could become quite big, into the billions. But that runs into the VERY significant issue that your locks might cause you to exceed your machine's virtual memory limit, either for your app or for the system itself (and I'm thinking the latter, because file sharing locks are typically held by the SYSTEM and not the user). A file lock is a data structure in Windows associated with the file handle, and it has to reside in memory until you can close the thing that holds the lock. So allowing them to accumulate eats memory and risks crashing the system, or locking it up so badly that it needs a three-finger salute.

Based on the online solutions I'm seeing, the better solution will be to find another way to do this - but in reviewing the original thread, I see that this was my own idea. What can I say? Some of my ideas are better than others. It happens!

Seriously, perhaps there is another way within the recordset. Is this something that could perhaps be done with recordsets defined for dbOptimistic? I don't know (and so far have not found online) the explanation for dbOptimisticBatch, but I'm pretty sure you want to do whatever you can to avoid locking a record any longer than needed.
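
For anyone who wants to experiment, here is a minimal (untested) sketch, reusing the Query1 example from earlier in this thread. The fourth argument of OpenRecordset is the LockEdit setting; with dbOptimistic the lock is taken only while Update executes rather than from Edit onward. As far as I can tell, dbOptimisticBatch applied only to the old ODBCDirect workspaces, so it's probably a dead end here.

Code:
' Sketch only: open with optimistic locking so the lock is held
' only during Update, not from Edit onward.
Dim rs As DAO.Recordset
Set rs = CurrentDb.OpenRecordset("Query1", dbOpenDynaset, 0, dbOptimistic)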

Another thought comes to mind. If this is a dedicated query, perhaps you don't care whether you lock it at all via SQL (because you would be managing locks in the recordset). Can you perhaps set the query to "No Locks" and trust that the recordset's locks will be enough?
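
In the query designer that is just the Record Locks property. If you'd rather do it in code, something like the sketch below might work, but note the assumption: I believe "RecordLocks" is the Access-created property behind that designer setting (0 = No Locks), and it may not exist on the QueryDef until the setting has been touched in the designer at least once.

Code:
' Sketch only; assumes the Access-created "RecordLocks" property
' exists on the QueryDef (0 = No Locks).
Dim qdf As DAO.QueryDef
Set qdf = CurrentDb.QueryDefs("Query1")
qdf.Properties("RecordLocks") = 0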

In this thread I saw a comment about using essentially an On Error block that does a Resume in certain cases, but it makes everything five times slower. This is because error handling triggers a trap, which forces a context switch to allow your interpreted trap-handler code to test and dismiss errors. That context switch ain't cheap because it involves stack manipulation AND CPU mode changes from USER to KERNEL and back again (mostly to muck with the stack), then at least briefly another USER/KERNEL swap when you do the RESUME, which collapses the hardware stack to help you reset your context to the code you were running when the error trap fired.
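
For reference, the pattern being discussed looks roughly like this (an untested sketch, reusing the Query1/F1 example from sneuberg's post; 3052 is the "File sharing lock count exceeded" error number):

Code:
Public Sub UpdateWithTrap()

    On Error GoTo Trap

    Dim rs As DAO.Recordset
    Set rs = CurrentDb.OpenRecordset("Query1")

    Do While Not rs.EOF
        rs.Edit
        rs!F1 = 5
        rs.Update
        rs.MoveNext
    Loop

    rs.Close
    Exit Sub

Trap:
    If Err.Number = 3052 Then       ' "File sharing lock count exceeded"
        DBEngine.Idle dbFreeLocks   ' release the accumulated locks
        Resume                      ' retry the statement that failed
    Else
        Err.Raise Err.Number        ' anything else: let it propagate
    End If
End Sub

Every one of those trips through the trap is a context switch, which is exactly why this approach crawls.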
 

The_Doc_Man

Immoderate Moderator
Staff member
Local time
Today, 12:31
Joined
Feb 28, 2001
Messages
27,163
For what it is worth, a MaxLocksPerFile of 9500 (the value arnelgp mentioned) is the default value from Microsoft.
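
Note that DAO.DBEngine.SetOption only overrides the value for the current session; it never writes to the registry, so the next session starts back at the registry value. If you want to put the default back explicitly within a session after a bulk run:

Code:
' SetOption changes are session-only; this restores the shipped default.
DAO.DBEngine.SetOption dbMaxLocksPerFile, 9500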
 

Simon82

New member
Local time
Today, 19:31
Joined
Aug 26, 2021
Messages
2
Hi everyone, I experienced the same problem and found this thread to be a great source of information, so thank you all.
Anyway, I found a different solution: enclosing the Edit/Update block inside an explicit transaction, as follows:

Code:
...

        DBEngine.BeginTrans          ' start an explicit transaction on the default workspace
        rs.Edit
        rs!PreviousValue = varPreviousValue
        rs.Update
        DBEngine.CommitTrans         ' committing releases this record's lock

...

This forces the lock to be released at each record and causes no significant performance degradation. You could also decide to commit the transaction every N records, depending on your needs.
Hope this can help other people.
You could also trap any error inside the transaction block by specifying an On Error GoTo clause and invoking DBEngine.Rollback inside the error handler.
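
For completeness, here is a rough, untested sketch combining the batched commit and the rollback handler, reusing the Query1/F1 example from earlier in the thread:

Code:
' Commits every 1,000 records so locks never pile up;
' rolls back the open batch if anything goes wrong.
Public Sub UpdateInBatches()

    On Error GoTo Trap

    Dim rs As DAO.Recordset
    Dim lngCount As Long

    Set rs = CurrentDb.OpenRecordset("Query1")
    DBEngine.BeginTrans

    Do While Not rs.EOF
        rs.Edit
        rs!F1 = 5
        rs.Update
        lngCount = lngCount + 1
        If lngCount Mod 1000 = 0 Then
            DBEngine.CommitTrans    ' releases this batch's locks
            DBEngine.BeginTrans     ' start the next batch
        End If
        rs.MoveNext
    Loop

    DBEngine.CommitTrans            ' commit the final partial batch
    rs.Close
    Exit Sub

Trap:
    DBEngine.Rollback               ' discard the uncommitted batch
    If Not rs Is Nothing Then rs.Close
    Err.Raise Err.Number
End Sub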

Another solution, if the database is always accessed by one person at a time, is to open it in exclusive mode, thus avoiding any record-locking issue.
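(You can do that by starting Access with the /excl command-line switch, or, if you open the back end from code, via the second argument of DBEngine.OpenDatabase, which is the Exclusive flag. A sketch, with a placeholder path:)

Code:
' Sketch only; the path is a placeholder.
' The second argument of OpenDatabase is the Exclusive flag.
Dim db As DAO.Database
Set db = DBEngine.OpenDatabase("C:\Data\Backend.accdb", True)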

Thanks again!
S.
 

The_Doc_Man

Immoderate Moderator
Staff member
Local time
Today, 12:31
Joined
Feb 28, 2001
Messages
27,163
I like your "transaction" idea, @Simon82, and it makes a certain kind of sense. The transaction "blocks" other DB activity, so it is a different class of lock that would be treated differently.

I'm not going to swear that opening in Exclusive Mode avoids the issue. There have been discussions about the fact that the recordset involves granular file access, which gives the file system the option/excuse to take out a lock anyway. Have you tested this Exclusive-Mode solution?
 

Simon82

New member
Local time
Today, 19:31
Joined
Aug 26, 2021
Messages
2
Hi @The_Doc_Man, yes, in my case opening Access in Exclusive Mode avoided the issue, but I preferred a VBA-based solution so that nothing needs to be done on the client side.
 
