Ah, simple enough. Let's use a recordset for the discussion.
The letter clearly suggests that ANY object on a disk will be managed through Windows file locks, and it implies that the locks are per buffer rather than per file. A recordset object refers to the contents of a table, either directly or through a query. That set of records resides on disk, so it will be the point of corruption if this error occurs.
For Access, the mechanism of corruption is almost always the same, even though the first step can vary. You corrupt a database when you start making changes to it but don't finish the process. That leaves the database in an inconsistent, half-updated state. The "inconsistent state" error message we have been discussing is quite literal here.
Based on what I read in the two letters, this inconsistent state occurs when Access opens a file for sharing BUT Windows in its infinite (?) wisdom takes out a non-shared lock on a buffer AND sets up one of the new buffering optimization schemes.
In older versions of Windows, the SMB protocol would write back buffer contents every time you updated the buffer, even if your next operation was in the same buffer. In SMBv2 and v3, though, the protocol has an option to NOT write that back immediately. It does that to reduce network traffic.
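If you want to rein that behavior in, the usual lever is on the file server rather than in Access. As a hedged sketch: `Set-SmbServerConfiguration -EnableLeasing` is a real SMB server cmdlet, but whether disabling leasing is appropriate for your environment is something you would have to evaluate and test yourself.

```powershell
# Run in an elevated PowerShell session ON THE FILE SERVER.
# Disabling SMB leasing discourages clients from holding modified
# buffers locally, so writes go back to the shared file promptly.
# Trade-off: more network traffic and slower shared-file access.
Set-SmbServerConfiguration -EnableLeasing $false

# Verify the current setting.
Get-SmbServerConfiguration | Select-Object EnableLeasing
```

This is a server-wide setting, so it affects every share on that machine, not just the one holding your database.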
When you have a recordset open and there is a currently selected record, the active copy of that record resides in a memory buffer corresponding to the place on disk where the record resided BEFORE you opened the recordset. So you have this recordset, you do some sort of .Edit/.Update sequence, and the active buffer gets updated in memory. The new protocols, however, allow that buffer to NOT get written to disk right away.
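In VBA terms, that sequence looks something like this (the table and field names are hypothetical, purely for illustration):

```vba
Dim db As DAO.Database
Dim rs As DAO.Recordset

Set db = CurrentDb()
' Hypothetical table name used for this example.
Set rs = db.OpenRecordset("tblCustomers", dbOpenDynaset)

rs.Edit                         ' copy the current record into the edit buffer
rs!Balance = rs!Balance + 10    ' change a field in the buffered copy
rs.Update                       ' commit the change to the in-memory page

' At this point the modified page may STILL be sitting in an SMB
' client-side buffer; nothing guarantees it has reached the disk yet.
```

The .Update looks final from the VBA side, but as described above, the newer SMB protocols are free to defer the actual disk write.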
The optimizer cannot tell why you pause after your most recent update to the recordset. So it does nothing. That new optimization doesn't know that you are done with the recordset object until you CLOSE the recordset and start to destroy it. At that point, the lock and buffer optimization must flush the modified buffers back to the disk. This is where your "destroying (or not)" issue kicks in.
IF, before this point of closure, another thread or a different part of your own process tries to touch that buffer again, the buffer has to be unlocked. According to what the letter and the linked articles describe, the buffer can be invalidated before it gets written back. If you were doing a sequence of updates and the last buffer doesn't get written back correctly, you now have a corrupt database.
The way to avoid that is to CLOSE whatever object you opened and then set it to Nothing. That action flushes the disk buffer and destroys the object, which prevents the corruption.
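As a sketch of that cleanup pattern, continuing the hypothetical recordset from above:

```vba
' Close and release objects explicitly, in the reverse order you opened them.
rs.Close            ' tells Access you are done; pending buffers get flushed
Set rs = Nothing    ' destroys the object reference

' Note: db came from CurrentDb(), which Access itself owns, so we only
' release our reference. Call db.Close only on databases YOU opened
' (e.g. via OpenDatabase).
Set db = Nothing
```

The key point is the explicit .Close: relying on the recordset to be cleaned up when the variable goes out of scope leaves the flush timing up to Windows, which is exactly the gap the corruption slips through.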
Did that help you see the mechanism any better?