Better to keep all tables in the same database (be vs. fe)?

Lightwave

Ad astra
Local time
Today, 16:19
Joined
Sep 27, 2004
Messages
1,521
Good point, but the make table query can't write tables to the back end of a split database...

So there may be situations where writing locally is required.
 

Galaxiom

Super Moderator
Staff member
Local time
Tomorrow, 01:19
Joined
Jan 20, 2009
Messages
12,852
Make table will create a table in another database.

However, temporary records are better written with append queries into existing tables that have first been cleared with a delete query. This allows the field types to be specified, which can't be done in a make table query.

Also, continually deleting and recreating tables builds up the count of tables in the history of the database. Object indexes only operate in a range from 0 to some finite number. I have read that the limit is somewhere in the 700s, and after that the objects need to be exported to a new database before any new objects can be created.
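For illustration, a rough VBA/DAO sketch of that delete-then-append pattern is below. The table, field and procedure names are invented for the example, so adapt them to your own schema.
Code:
' Clear and reload a pre-built temp table instead of running a make table query.
' tblTempImport and tblStaging are example names only.
Public Sub ReloadTempTable()
    Dim db As DAO.Database
    Set db = CurrentDb()

    ' Empty the existing temp table so its field types, indexes and other
    ' properties are kept (a make table query would drop and recreate the
    ' table, guessing the field types each time).
    db.Execute "DELETE FROM tblTempImport;", dbFailOnError

    ' Append the fresh rows from the staging source.
    db.Execute "INSERT INTO tblTempImport (CustomerID, OrderDate, Amount) " & _
               "SELECT CustomerID, OrderDate, Amount FROM tblStaging;", dbFailOnError

    Set db = Nothing
End Sub
Because the table itself is never dropped and recreated, repeated runs do not add to the object count described above.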
 

Lightwave

Ad astra
Local time
Today, 16:19
Joined
Sep 27, 2004
Messages
1,521
My mistake - useful to know. I hadn't thought about that.

Good to know about problems before they arise!
 

gemma-the-husky

Super Moderator
Staff member
Local time
Today, 16:19
Joined
Sep 12, 2006
Messages
15,657
Make table will create a table in another database.

However, temporary records are better written with append queries into existing tables that have first been cleared with a delete query. This allows the field types to be specified, which can't be done in a make table query.

Also, continually deleting and recreating tables builds up the count of tables in the history of the database. Object indexes only operate in a range from 0 to some finite number. I have read that the limit is somewhere in the 700s, and after that the objects need to be exported to a new database before any new objects can be created.

does that mean there is a limit to the number of times you can import a csv or spreadsheet to a new table?
 

Banana

split with a cherry atop.
Local time
Today, 08:19
Joined
Sep 1, 2005
Messages
6,318
Unfortunately, yes. The counter gets incremented regardless of what the object is, if I remember correctly. I've seen other posts about problems with creating new objects which later turned out to be the fault of code continually creating new objects on a regular basis, so that the 768 limit (that's the number I have in my head - don't quote me on this) is quickly reached.

Yet another reason why one should just truncate & reload.
 

gemma-the-husky

Super Moderator
Staff member
Local time
Today, 16:19
Joined
Sep 12, 2006
Messages
15,657
does compacting help?

does that replenish the counts?
 

Galaxiom

Super Moderator
Staff member
Local time
Tomorrow, 01:19
Joined
Jan 20, 2009
Messages
12,852
When importing data I link the file. Subsequent files are simply moved into that location using a DOS batch file. I like DOS because I have always used it. A batch file can easily be scheduled to run while the db is closed. I also preprocess the file to make it suitable for import. The FOR /F command can be used as a powerful, fast-working line parser.

Of course the file cannot be replaced while the database is accessing it. To test for this in the DOS batch I command it to copy the linked file to itself. If the file is locked, DOS returns a message saying so. If the file is not locked, it reports that a file cannot be copied to itself. Parsing the error message reveals the state of the file, and a warning is inserted into my pre-import log file, which is displayed in Access.
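For anyone who would rather test from inside Access instead of the batch file, a rough VBA sketch of an equivalent check is below. It is not the DOS copy-to-itself trick, just an alternative that tries to open the file exclusively and treats failure as "in use"; the path in the example call is hypothetical.
Code:
' Sketch: returns True if another process still has the file open.
' Note: Open will create the file if it does not already exist, so check
' with Dir() first if that matters.
Public Function FileIsLocked(ByVal strPath As String) As Boolean
    Dim intFile As Integer
    intFile = FreeFile
    On Error Resume Next
    ' Try to open the file with an exclusive lock; if something else has it
    ' open this raises error 70 (Permission denied).
    Open strPath For Binary Access Read Write Lock Read Write As #intFile
    FileIsLocked = (Err.Number <> 0)
    Close #intFile
    On Error GoTo 0
End Function

' Example call (path is an example only):
' If Not FileIsLocked("C:\Imports\daily.csv") Then ' safe to swap the file in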
 

Banana

split with a cherry atop.
Local time
Today, 08:19
Joined
Sep 1, 2005
Messages
6,318
No, compacting will not replenish it. The increment, AFAICT, is permanent, and the only solution is to import all existing objects into a new database file.

A similar problem exists for the controls on a form - there have been cases of people posting that they had code dynamically creating & deleting controls, which eventually broke their database when the increment reached its maximum.

I'm inclined to say that in general it's best to use existing objects and modify them to suit rather than creating and deleting temporary objects.

I suppose that is also a good case for a side end, so you never need be concerned with such problems in the front-end if importing is a common part of the workflow.
 

Banana

split with a cherry atop.
Local time
Today, 08:19
Joined
Sep 1, 2005
Messages
6,318
I just noticed that my thoughts about the Side End haven't made it to here, so here goes...

I thought long and hard about Galaxiom's very interesting comments. It was quite puzzling because of his claim that it's a fundamental programming rule that a file should not write to itself. While I agree this is a good rule, I couldn't help but wonder why it wasn't stated earlier.

Thinking about it some more, I can say I have seen many experienced developers (myself included) always recommending splitting the file, and this is, in principle, adhering to the idea that the file should not write to itself. But looking at the basic design, Microsoft chose to use a one-file solution in which the file would contain both the application parts and the data parts. Certainly that was very convenient for users who weren't serious developers and needed a simple means of quickly sharing data with other users (e.g. copy the file onto a floppy disk and hand it to the other user... remember those? My gracious goodness.) Thinking further about it, I then realized that technically it is MSACCESS.EXE that is running and manipulating the .mdb or .accdb file. Certainly it reads the application parts from the file, but nonetheless it is still MSACCESS.EXE that is in charge, which partially explains why it has been "safe" to write to the active & open .mdb, in much the same manner as we can write and save changes to a .doc file with WORD.EXE.

But what is the real reason for splitting? We all know that we recommend splitting for purposes of supporting concurrency and providing some level of stability. We don't want to waste scarce network bandwidth on downloading (and re-downloading) forms & reports & VBA when they could be right there on our desktop, so we can concentrate on only getting the data. It should also be noted that by keeping the application part on the desktop, we are minimizing the risk by having it in a more reliable context... It's much more likely that something bad will happen in a network cable than on the motherboard bus or the hard drive head. It's just a fact of the system. IP is an unreliable protocol that provides no guarantees about what order the packets will leave and arrive in, whether all packets will actually reach their destination, or whether they will be altered in transit. Those problems are simply the nature of having to deal with electrical interference, noise, distance, faulty cabling or bad plugs, among other possible factors. We try to compensate for this with TCP, but we must be aware that though TCP tries to hide the fact that we're using an unreliable protocol (by counting the number of packets to be received, checksumming them and not passing them to the receiving application until all packets have arrived, been re-ordered into the correct order and passed the checksum), this still leaks through to us as slower than usual network activity. Now to be completely honest, I'm not very sure exactly how corruption arises anyway even with TCP's reliability checks (actually, I'm assuming TCP is being used when we use a linked backend across a local network... Anybody know for sure?) but I've usually heard of corruption occurring in that context far more often than in any other context (such as having a bad sector on the hard drive).

Anyway, the whole point of this is that by splitting the files, we are reducing the risk of corruption due to an unreliable protocol while facilitating sharing the file among several people. After all, it wouldn't be very practical to share the file from your desktop, no?

Now circling back to the side ends. We were discussing using one to hold the temporary objects & data rather than using a front-end, which should not change while in use. I suspect that if it's MSACCESS.EXE that's doing the writing, it really isn't violating the rule of 'a file shouldn't write to itself', so we should consider what really causes corruption whenever we write to the front-end on a local hard drive. Off the top of my head, the most common answer would be having the front-end continually creating & deleting objects, which is considerably messier than if we were to truncate & reload or modify the existing objects to suit our needs. The next problem would be associated with compacting on close... while I've not personally experienced this, I've seen other posters recommend against compacting on close... on some principle, it is understandable for the same reasons we shouldn't decompile/recompile at every slight problem, as it may actually cause more harm than help. Thinking some more, in my case, the temporary tables in my front-ends were mainly lookup tables that get updated once a year or maybe at most three times a year. Very infrequent, and certainly less so than the updates I make to the GUI when I re-distribute the front-end. Would I have a different attitude had I been required to make a file import part of a daily workflow? Probably, yes.

So, all in all, I am inclined to think that the side end is a good idea and should be used when the frequency of managing temporary objects is high enough that it would create significant bloat. Otherwise, there is nothing wrong with using the front-end to house the temporary objects for convenience, provided the imports are few and far between enough that the front-end gets updated more often than its temporary tables change, at which point it becomes a moot point.

Would be happy to hear anyone's take on the cause of corruption and merits of housing temporary data...
 

Galaxiom

Super Moderator
Staff member
Local time
Tomorrow, 01:19
Joined
Jan 20, 2009
Messages
12,852
Excellent insight Banana as always.

If TCP/IP is the protocol used on the network then that is what Access is using. I would only add that when a transmitting computer doesn't receive an acknowledgement for a packet, it waits twice as long before resending on each try, quickly leading to a very long wait.

Technically the mdb is not writing to itself but rather msaccess is doing the job. However Access is following instructions from the mdb. Hence we still have a potential loop where corruption of the file that holds these instructions could result in a "lifting by the bootstraps" failure. The side end is not difficult to achieve and there are benefits regarding compacting and the object index limit. I think there is no doubt that use of the side end, particularly for temporary data is best practice.

Compare the situation on the old information paper-highway. The mdb is a reference book. While one might write notes with further information in the reference book's margins, you would not jot down your calculations or shopping list there.
 

Galaxiom

Super Moderator
Staff member
Local time
Tomorrow, 01:19
Joined
Jan 20, 2009
Messages
12,852
At the risk of being tried for heresy...

Presumably sharing of a front end causes corruption due to conflicts in writing information back to the mdb. If all writing to the FE was eliminated and each user had a separate Side End would the problem with sharing an FE evaporate?

What other mechanism might there be that would cause corruption of a shared front end? Many other executables will happily run multiple sessions so long as they don't write anything to their home folder that would clash with other users.

Of course the unnecessary network traffic of loading the FE from the server every time should be considered. Besides, the distribution of the FE is easily managed by one of the free tools mentioned here regularly. I am not about to change my practices, but I do wonder whether the central culprit in the problem leading to the prohibition of shared FEs has yet been precisely identified.

Can an mdb Front End be shared if set to Read Only so all writing to the front end is excluded?
 

Banana

split with a cherry atop.
Local time
Today, 08:19
Joined
Sep 1, 2005
Messages
6,318
Excellent insight Banana as always.

Thank you. Your questions were good and important to think about. I'm quite glad for this.

If TCP/IP is the protocol used on the network then that is what Access is using. I would only add that when a transmitting computer doesn't receive an acknowledgement for a packet, it waits twice as long before resending on each try, quickly leading to a very long wait.

That is true. As I said, IP is unreliable and TCP tries to hide it, but we still feel the effect, which manifests as slowness & long waits and possibly even worse things such as timeouts or dropped connections. Now that reminds me - Access does not like wireless clients because it has no good mechanism for handling a dropped connection. The fuzzy understanding I have is that when it writes to a .mdb file in a hosted folder, it creates a .ldb file to record its activity, right? Now if the connection drops, Access may reconnect but still see the .ldb file there and decide that there is a conflict or something? Corruption is almost inevitable when we have wireless clients, and TCP's reliability cannot save us from this.

Technically the mdb is not writing to itself but rather msaccess is doing the job. However Access is following instructions from the mdb. Hence we still have a potential loop where corruption of the file that holds these instructions could result in a "lifting by the bootstraps" failure.

I have to agree. The potential for an irresolvable loop is still there even though it is MSACCESS.EXE that is executing the instructions from the .mdb file. This probably could be avoided by careful programming, but why take excessive risks? It's the same kind of idea as saying that you shouldn't use Access's relationships and should protect RI with your own VBA code instead. Sure, it may work, but why bother re-inventing the wheel?

The side end is not difficult to achieve and there are benefits regarding compacting and the object index limit. I think there is no doubt that use of the side end, particularly for temporary data is best practice.

I don't doubt that if reliability is particularly important, it will make sense to use a Side End. Unfortunately, I can't say it is always the best solution - I've already cited an example where I created a Side End in a Citrix environment and suffered worse performance than if I had just had a single Front-End holding both application data & the temporary data used for lookups. Fortunately, since Citrix is already set up to give users a disposable front-end, the process is quite transparent, and since it's always single-use, the chance of corruption is very low. Also, technically, users don't get to edit the temporary data - it's administered by an admin. I only used local tables primarily for performance reasons, to enable fast lookups so the application can send efficient requests to the SQL Server for generating reports.

But going back to the point - would I dispute that the Side End is _generally_ a good solution? No, not at all. I think it makes a lot of sense to have a separate scratch pad.

Compare the situation on the old information paper-highway. The mdb is a reference book. While one might write notes with further information in the reference book's margins, you would not jot down your calculations or shopping list there.

Nice analogy. It also goes to show that 'writing in the margins' would be the case in my Citrix example, but yes, if the writing amounts to writing out calculations or shopping lists then that's just bizarre and leaves us with a dirty book.

We should also be aware that this is how we ended up with Fermat's Last Theorem. For anyone not familiar with it, Fermat was a mathematician long ago who, while reading someone else's treatise on mathematics, came upon a chapter discussing whether this:
Code:
x^a + y^a ≠ z^a

where x, y, z are nonzero whole numbers and a is greater than 2
(basically a variation of Pythagoras' theorem) could be proven. Fermat wrote in the margin, "I have found a remarkable and simple proof for this proposition. But the margin is too small to contain the proof."

A few days later, he died. Everybody (well, mathematicians, really) was aghast and scurried to re-discover the 'remarkable and simple proof' that Fermat had claimed to have found. It would not be re-discovered for several centuries (only as recently as the 1970s, if my memory serves.)

So, writing in margins could give us another unfortunate case of Fermatism!

Presumably sharing of a front end causes corruption due to conflicts in writing information back to the mdb. If all writing to the FE was eliminated and each user had a separate Side End would the problem with sharing an FE evaporate?

I'm inclined to say no, though I would not disagree that the risks would be minimized.

What other mechanism might there be that would cause corruption of a shared front end? Many other executables will happily run multiple sessions so long as they don't write anything to their home folder that would clash with other users.

The real question lies in how Access is designed. It's easy to point to an executable and claim that it supports concurrency well, but we must carefully consider whether the designs are in fact similar enough. For starters, it would be useless to compare Access with, say, SQL Server, MySQL or Oracle, because Access does not have a daemon as the server RDBMSs do, which significantly simplifies the logistics of managing data files; the daemon does all of this. Access's architecture was meant to be simple enough that you don't have to make sure a daemon is up and running... that is not something end users are accustomed to. OTOH, we may have better luck comparing, say, SQLite or SQL Server Compact Edition, both of which do not use a daemon and run in-process with the calling application just like Access (e.g. MSACCESS.EXE & msjet40.dll).

The question is further complicated by the fact that Microsoft decided to have a single file house both application and data. I do not know of any other executable that has such a file structure. Alpha Five and FileMaker, for example, usually use multiple files: FileMaker uses one file per table, and thus it is common for users to open several FileMaker files in a single session. Alpha Five creates several files... I suspect on an object-per-file ratio or something close to it.

A tangent: I really think it would have significantly simplified matters had Microsoft adopted the idea of a "package". In Mac OS X, if you open the Applications folder, you get a nice list of actual applications... Safari.app, TextEdit.app, Terminal.app. Double-click and the program runs. But they are not single files! They are actually folders in disguise: if you right-click on one of those .app files, there's an option to "Show Package Contents" which exposes the interior structure, including library files, frameworks, resource files and the binaries. This is a very clean and slick way of hiding the ugly details and keeping them in a logical place. Had the .mdb file adopted this, we would always have a split application (just copy the data file out of the .mdb package into a new .mdb to share it), as well as relaxing the 2 GB limit from the whole application to 2 GB per object, among many other benefits. This would be a good way to have the cake and eat it too. But I digress.

Back to the question... I already cited the possibility of bad blocks on the hard drive, but we should also consider the already-mentioned failure by bootstrap as a potential contributor to corruption. Another candidate I would want to look at is the user's VBA code. I've had unfortunate experiences, and seen others post, about how poorly thought out VBA code can trip over itself and thus create an undesirable state of affairs.

Of course the unnecessary network traffic of loading the FE from the server every time should be considered. Besides, the distribution of the FE is easily managed by one of the free tools mentioned here regularly. I am not about to change my practices, but I do wonder whether the central culprit in the problem leading to the prohibition of shared FEs has yet been precisely identified.

To be quite honest, I would be surprised if the problem hadn't already been identified after so many years and by so many people. I've usually attributed this to two things: 1) architectural design, which means trading off this for that, not unlike any engineering work; sure, you can add more material to make a product stronger, but it's unlikely you can do so without increasing its weight and cost; 2) corruption is by definition imprecise in nature. What is corruption, exactly? Bad or junk data. Okay. Define bad or junk data. Something not entered by the operator, even in error. Okay, what else? ... Basically, we end up with a negative definition of corruption, e.g. what it is not rather than what it is, and therefore it can encompass many different aspects and may manifest similar symptoms for a variety of causes.

I guess it may help us to find out whether corruption is also a problem with FileMaker or Alpha Five, to give us some kind of frame of reference.

Can an mdb Front End be shared if set to Read Only so all writing to the front end is excluded?

I've actually wondered about this. Could we set up Windows permissions to grant read-only rights? I vaguely recall seeing a post on the newsgroups suggesting that while it could be done, it creates more problems than it resolves, but that's no good; my fuzzy recollection is scant evidence for or against the proposition. I'll see if I can find it and see what the context was.
 

Galaxiom

Super Moderator
Staff member
Local time
Tomorrow, 01:19
Joined
Jan 20, 2009
Messages
12,852
Thanks for that in-depth analysis Banana. The diversion with Fermat's Last Theorem was a nice touch.
 

Rabbie

Super Moderator
Local time
Today, 16:19
Joined
Jul 10, 2007
Messages
5,906
Two more cents' worth on the subject of a file writing to itself. When I started working as a programmer in the dim and distant past, the mainframe we used only supported loading each program as a single file which contained not only the executable code and constants but also the workspace for the program. What made this acceptable was the fact that we did not write the file back to disc, so it remained uncorrupted. Of course the actual output was written back to different files.

So using this as an analogy, I do not see a philosophical issue with storing temporary tables in the FE provided it is not a shared FE. Though it would be better to store them in a separate DB stored locally. This reduces the amount of network traffic, which must have performance benefits.

PS Also enjoyed the reference to Fermat's Last Theorem :) According to Wikipedia it was not finally proved until 1995.
 

gemma-the-husky

Super Moderator
Staff member
Local time
Today, 16:19
Joined
Sep 12, 2006
Messages
15,657
but access doesn't write to itself

access is msaccess.exe

access is writing to a .mdb file or similar, which (in conceptual terms at any rate) is no different to writing to a .doc file, or (more comparably) a .xls file

access clearly has a sophisticated structure (as does excel) to support both table structures AND data, but that's all, surely

----------
re side ends

one of the problems with a shared front end appears to be the management of local tables - so that one user may overwrite another user's local data - it doesn't seem to me that using a side end changes this at all
 

dfenton

AWF VIP
Local time
Today, 11:19
Joined
May 22, 2007
Messages
469
I think there's some odd stuff in this thread.

1. while most everybody today uses TCP/IP networking, at the time Access was created, TCP/IP networking in environments where Access was being used would have been relatively rare. The most common networking protocols back then would have been NETBIOS and Novell's IPX/SPX, and they were both structured very differently, so any discussion of the way TCP/IP works is really tangential to the way Jet interacts with the underlying networking protocol. Indeed, Jet does *not* interact with the networking protocol -- Jet knows nothing at all about networking. It works just exactly the same with a file on your local hard drive as it does with a file on a server. AV software has been known to interfere with Jet and lead to corruption even for files stored on the local hard drive, so the issue is not networking but anything that interrupts Jet's communication with its own files. Specifically, Jet needs to ping the LDB file every second or so in order to maintain the locking on the file. This is the case even when there's only one user. Anything that blocks the LDB ping for enough time to exceed whatever Jet's internal timeout is will give you the unrecoverable DISK OR NETWORK ERROR. But it's also the case that interrupted or corrupted packets can get through and corrupt the file without the client computer knowing this, or some other user can corrupt a shared file and this still not cause problems for other connected users (likely because the local temp files don't include the data pages where the corruption actually happens).

2. whether or not your application makes changes to your front end, Access makes such changes, so I think the "avoid having a program write to itself" rule is a red herring. You can't under any circumstances avoid that, except by making the front end read-only, and I don't think the inconvenience of doing that is worth it, since the risk is very small.

3. Jet objects can be much more safely shared than Access objects, even though both data tables and Access objects are all stored in Jet tables. The difference is that since A2000, the entire Access project is stored in a single BLOB field in an Access system table. Previous to that, each Access object was stored in its own record in a system table. This is why before A2000 you could save each object individually, without needing to deal with the multiple-object save dialog. In A2000, since you're writing to a single field in a system table, you have to decide if you're going to save all the objects that have been edited or not. What this means for sharing a front end is that before A2000, sharing a form meant sharing a single record for each form, so the locking was much more granular. Thus, corruption in a shared front end was much less likely. In A2000, you're sharing a single record with data that has a very complex internal structure, and that is a much larger chunk of data. Thus, the possibility of corruption skyrockets with the monolithic save model.

4. I don't find the "side end" concept useful at all. The "side end" is just an additional back end, one that is stored locally and not shared. It's still a back end, particularly when all it has in it is data tables. I, for one, don't believe there's any utility at all in the practice of storing lookups locally. Jet caches data so incredibly efficiently that all you're really accomplishing is making the initial retrieval of the lookup data faster. I don't believe the minuscule benefit from that is even close to being sufficient to justify the overhead of maintaining the local copies of the lookups, so I keep lookups in the back end.

5. This is not to say I have no local tables in my front ends. For the few apps that use a Switchboard, the switchboard table is in the front end because while it's a data table, its purpose is to support a UI object, and thus it belongs in the UI layer of the application (not in the data store). I also often use a "reports switchboard" as a centralized location from which to print reports, and this also has a table local to the front end. Additionally, in some apps I use some graphics for display indicators, and because graphics are relatively large amounts of data, and because the purpose of the graphics is to display a visualization of the meaning of the data (i.e., UI), I keep these graphics in local tables in the front ends. But none of these tables are ever altered at runtime. They are only ever altered by me during the development process, and then those changes are distributed when a new front end is distributed. I would argue that there are no circumstances under which any front-end objects should be modified at runtime (except insofar as Access itself is altering them behind the scenes, e.g., saving sort/filter properties). So, all UI components belong in the front end and are never altered at runtime, and all data components belong either in the shared back end or in a local back end (limited to temp data specific to the current user, in my opinion).

6. the problem with WiFi is that the connection itself is so unreliable, not that the protocols in use are less reliable. If you had a 100BaseT Cat5 wire running under the rail of the rocking chair you sit in while working at the computer, you could end up with just as unreliable a network connection as with WiFi, since rocking back you could short out one of the wires and break the connection. Network cables that run past electrical components that produce lots of interference can also introduce enough noise into the network signal that the connection is lost, and that, too, can lead to dropped connections, just as with WiFi. The real problem with WiFi is that it shares a radio spectrum with a lot of unregulated devices that can interfere with the WiFi connection. These include baby monitors, cordless phones and in the UK (as reported by The Register), devices to repeat TV signals from one room of the house to another. Microwave ovens and other appliances and devices can also produce interference that interferes with WiFi. The in-and-out nature of WiFi is not so much of an issue with Internet traffic (even streaming protocols, which have buffering built in, because it's assumed the connection won't be always-on every single second) because those connections are stateless as well as not being important enough to matter (if a web page doesn't load, you just do a refresh and try again).
 

Galaxiom

Super Moderator
Staff member
Local time
Tomorrow, 01:19
Joined
Jan 20, 2009
Messages
12,852
one of the problems with a shared front end appears to be the management of local tables - so that one user may overwrite another user's local data - it doesn't seem to me that using a side end changes this at all

It is a matter of the address of the side end. Access is running locally no matter where the front end comes from, so provided a local address is used there would be no conflict between users.

My preference for a side end location is
%LocalSettings%\Application Data\myappname
which normally resolves to:
"C:\Documents and Settings\username\Local Settings\Application Data\myappname"

It is available to users with typical security settings and is not normally included in a roaming profile.

I prefer user settings on the server but in a roaming profile they could be kept in:
%ApplicationData%\myappname
which resolves to:
"C:\Documents and Settings\username\Application Data\myappname"

This is normally roamed so becomes available locally on any computer in the domain when the user logs in. It is also the easiest place to keep the front end when using an FE updater.

However with appropriate permissions and a RunAs environment used in the distributor the FE can be placed in %ProgramFiles%. This avoids a large front end bloating the roaming profile.

Note that it would be advisable for security's sake to clean out any sensitive data from the side end immediately after use. In the case of a Local Settings side end on a roaming profile the side end needs to be regenerated each time the user logs on since it is deleted at logoff.
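To make that concrete, here is a rough VBA/DAO sketch of regenerating a per-user side end in the local profile and linking one of its tables into the front end. The folder, file and table names are invented, and Environ("LOCALAPPDATA") assumes a version of Windows that defines that variable; on older machines build the Local Settings\Application Data path shown above instead.
Code:
' Sketch only: recreate a per-user side end under the local (non-roaming)
' profile and link one of its tables into the front end.
Public Sub EnsureSideEnd()
    Const SIDE_NAME As String = "MyAppSide.mdb"   ' example file name
    Dim strFolder As String, strSide As String
    Dim dbSide As DAO.Database
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef

    ' %LOCALAPPDATA% is defined on Vista and later; on XP build the path
    ' from Environ("USERPROFILE") & "\Local Settings\Application Data".
    strFolder = Environ$("LOCALAPPDATA") & "\myappname"
    If Dir$(strFolder, vbDirectory) = "" Then MkDir strFolder
    strSide = strFolder & "\" & SIDE_NAME

    ' Regenerate the side end each session so temporary data never piles up.
    If Dir$(strSide) <> "" Then Kill strSide
    Set dbSide = DBEngine.CreateDatabase(strSide, dbLangGeneral)
    dbSide.Execute "CREATE TABLE tblScratch (ID LONG, Payload TEXT(255));", dbFailOnError
    dbSide.Close

    ' Refresh the front end's link to the scratch table.
    Set db = CurrentDb()
    On Error Resume Next
    db.TableDefs.Delete "tblScratch"              ' drop any stale link
    On Error GoTo 0
    Set tdf = db.CreateTableDef("tblScratch")
    tdf.Connect = ";DATABASE=" & strSide
    tdf.SourceTableName = "tblScratch"
    db.TableDefs.Append tdf
    db.TableDefs.Refresh
End Sub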
 

Banana

split with a cherry atop.
Local time
Today, 08:19
Joined
Sep 1, 2005
Messages
6,318
David, thanks so much for chiming in. Much appreciated.

1. while most everybody today uses TCP/IP networking, at the time Access was created, TCP/IP networking in environments where Access was being used would have been relatively rare. The most common networking protocols back then would have been NETBIOS and Novell's IPX/SPX...

Does this remain the case even today?

Indeed, Jet does *not* interact with the networking protocol -- Jet knows nothing at all about networking.

You are right - I should have worded it better, since JET depends on the OS to do the file sharing, and as I admitted earlier, I'm a bit fuzzy on exactly what protocol is being used in this context. What I'm really interested in knowing is how corruption can arise anyway.

It works just exactly the same with a file on your local hard drive as it does with a file on a server. AV software has been known to interfere with Jet and lead to corruption even for files stored on the local hard drive, so the issue is not networking but anything that interrupts Jet's communication with its own files.

I didn't know that AV software could do that. Thanks.

Specifically, Jet needs to ping the LDB file every second or so in order to maintain the locking on the file. This is the case even when there's only one user. Anything that blocks the LDB ping for enough time to exceed whatever Jet's internal timeout is will give you the unrecoverable DISK OR NETWORK ERROR.

Interesting. I'm not sure I understand the reason for needing to ping the file every second. It seems to me it would make more sense to check the .ldb file based on an event rather than on a timer.

But it's also the case that interrupted or corrupted packets can get through and corrupt the file without the client computer knowing this, or some other user can corrupt a shared file and this still not cause problems for other connected users (likely because the local temp files don't include the data pages where the corruption actually happens).

That's what was puzzling me a bit when I discussed TCP being reliable - but it does me no good to assume it's TCP being used. If we knew what protocol Windows uses for sharing files on a remote drive, that would help the discussion, I think.

3. Jet objects can be much more safely shared than Access objects, even though both data tables and Access objects are all stored in Jet tables. The difference is that since A2000, the entire Access project is stored in a single BLOB field in an Access system table. Previous to that, each Access object was stored in its own record in a system table. This is why before A2000 you could save each object individually, without needing to deal with the multiple-object save dialog. In A2000, since you're writing to a single field in a system table, you have to decide if you're going to save all the objects that have been edited or not. What this means for sharing a front end is that before A2000, sharing a form meant sharing a single record for each form, so the locking was much more granular. Thus, corruption in a shared front end was much less likely. In A2000, you're sharing a single record with data that has a very complex internal structure, and that is a much larger chunk of data. Thus, the possibility of corruption skyrockets with the monolithic save model.

Fascinating. That was a nice explanation - I've seen others express some kind of preference for 97 over 2000-2003, and I suppose that would be one good reason. I can't understand why they decided to make a single BLOB record of the whole collection of Access objects - seems quite backward to me. Not that it justifies sharing Access objects even in 97 anyway.

I, for one, don't believe there's any utility at all in the practice of storing lookups locally. Jet caches data so incredibly efficiently that all you're really accomplishing is making the initial retrieval of the lookup data faster.

Well, to me, it's not just about performance but also about cutting unnecessary network traffic. Why saturate the wire with requests for the same data that does not change, or changes slowly? By doing so, concurrency improves and the application can scale better. This is the same principle as when working with a server-based RDBMS. It's easy to say that a query that takes one second to execute is acceptable, but if it could be optimized to do the same thing in a millisecond, that is even better, because the chances of contention are much reduced and users get a good experience out of the application without needing to break out the wallet for expensive hardware. Perhaps it's not justified with a Jet backend, but it certainly is with a server RDBMS backend, and I just think it's good practice to design the application to be a good citizen.

I don't believe the minuscule benefit from that is even close to being sufficient to justify the overhead of maintaining the local copies of the lookups, so I keep lookups in the back end.

Maybe that is the case, but I don't think we can honestly hold this for all possibilities.

the problem with WiFi is that the connection itself is so unreliable, not that the protocols in use are less reliable

Um, I thought I said that TCP will attempt to cover for the unreliable nature of the connection by sending back requests for packets whose checksums fail and so forth, which manifests to the end user as a slow network connection. The application is ignorant of what TCP is actually doing to hide that fact. But that goes back to the question... is TCP/IP really the protocol that is used to share a file across the network? I would be floored if it were using a different protocol, especially one that does not check packets as they arrive, such as UDP.
 

Banana

split with a cherry atop.
Local time
Today, 08:19
Joined
Sep 1, 2005
Messages
6,318
Oops, missed this.

Also enjoyed the reference to Fermat's Last Theorem :) According to Wikipedia it was not finally proved until 1995

Thanks - it looks like the date I had remembered referred to one of the milestones toward proving special cases, but not the actual general proof. It also appears that he didn't die 'a few days later' but rather 30 years after writing that note in the margin, which kind of makes me want to doubt his credibility in claiming that he had the purported general proof.

However, I liked that if you look at the related link about the theorem in fiction, there is this:
In Tom Stoppard's play Arcadia, Septimus Hodge poses the problem of proving Fermat's Last Theorem to the precocious Thomasina Coverly (who is perhaps a mathematical prodigy), in an attempt to keep her busy. Thomasina's (perhaps perceptive) response is simple—that Fermat had no proof, and it was a joke to drive posterity mad.
Link

Whether this is true or not, I think Fermat indeed succeeded. :D
 

Galaxiom

Super Moderator
Staff member
Local time
Tomorrow, 01:19
Joined
Jan 20, 2009
Messages
12,852
Interesting about the change in file structure between Access 2000 and later versions.

Ironically with Word 2007 they went the other way. Word files are now zip archives with separate files containing the text, formatting and links among other things.

You can actually open the archive and edit some of the stuff. It was really useful when I needed to change the path of every hyperlink in a large document. Doing it in Word would have required a lot more work.
 
