Better to keep all tables in the same database (be vs. fe)?

dfenton

Does this [use of NETBIOS and IPX/SPX] remain the case even today?

Novell switched to TCP/IP for its networking stack over 10 years ago, and NETBIOS is only going to be in use in peer-to-peer networks. I'm a huge fan of NETBIOS, actually -- it's incredibly fast and efficient -- but it's non-routable, so not really usable on a network of any size.

I'm not sure I understand the reasons for needing to ping the file every second. Seems to me it would make more sense to check the .ldb file based on an event rather than on a timer.

The reason is to show you updates from other users. This interacts with the refresh interval and with record locking (optimistic or pessimistic). It has to happen on a timer so that, for instance, you can be given a visual representation of the locking status of a record in the record selector.
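
(For what it's worth, both the refresh interval and the retry behavior are exposed as Access options and can be adjusted from VBA; a minimal sketch, with illustrative values only:)

Code:
' Illustrative values only -- these are the same settings found under
' Tools > Options > Advanced, applied to the current Access session.
Application.SetOption "Refresh Interval (sec)", 60        ' how often bound forms re-read shared records
Application.SetOption "Update Retry Interval (msec)", 250 ' wait between attempts to save a locked record
Application.SetOption "Number of Update Retries", 2       ' attempts before reporting the lock to the user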

That's what was puzzling me a bit when I discussed TCP being reliable -- but it does me no good to assume it's TCP being used. If we knew what protocol Windows uses for sharing files on a remote drive, that would help the discussion, I think.

It's irrelevant to Jet/ACE, which knows nothing about the networking layer. To Jet/ACE, a local hard drive and a file share on a remote server (mapped to a drive letter or not) are exactly the same. I don't think knowing the networking protocol helps the discussion at all, but it would help in troubleshooting. In general, there is no difference in terms of reliability in regard to Jet/ACE for the various network protocols because Jet waits a really long time (in computer terms) before it gives up on a spotty connection. On a local LAN, TCP/IP transmission errors are going to be corrected in milliseconds or less, and if they're not, it means something is badly wrong at a level either above TCP/IP (something's blocking packets) or at a level beneath it (physical wiring). I think that this is the case for all the common protocols, so I don't really think they make a difference.

I can't understand why they decided to make a single BLOB record of the whole collection of Access objects -- seems quite backward to me. Not that it justifies sharing Access objects even in 97, anyway.

I think it was for compatibility with the VBE, but I could be wrong on that.

Well, to me, it's not just about performance [in re: caching lookup tables locally] but also about cutting unnecessary network traffic. Why saturate the wire with requests for the same data that doesn't change, or changes only slowly?

The amount of data involved is minuscule, and has to be retrieved only once per session, anyway (if it's volatile enough to need to be re-retrieved, then it's not a candidate for local caching). Now, one argument for your approach might be that there is locking overhead across the wire that wouldn't be there for local tables, but the overhead of a read lock is very small. You know the tables are not going to be updated, so there's never going to be any write lock contention, so I'd say this is a really minor consideration.

By doing so, concurrency improves and the application can then scale better.

I think you're futzing with optimizing sub-1% performance issues (substantially less than 1%, in fact). Record locking aside, the only data you're avoiding loading across the wire is that first request for the lookup table within each Access session. Since we're talking about lookup tables, we're really considering sub-1000-record tables. Even with larger static lookup tables (such as a zip code lookup table), I think Jet is going to be efficient enough (remember that it doesn't pull full tables, but uses the indexes to decide which data pages to retrieve) that the difference between caching it locally and pulling it from the server isn't worth the maintenance overhead.

This is the same principle as when working with a server-based RDBMS. It's easy to say that a query that takes one second to execute is acceptable, but if it could be optimized to do the same thing in a millisecond, so much the better, because the chances of contention are much reduced and users get a good experience out of the application without needing to break out the wallet for expensive hardware. Perhaps it's not justified with a Jet back end, but it certainly is with a server RDBMS back end, and I just think it's good principle to design the application to be a good citizen.

I think you're worrying about an amount of data that is at least 1 or 2 orders of magnitude below the threshold that is actually going to make any real performance difference. Without Jet's excellent data caching, or with much slower networks (your approach might be justified for large lookup tables on a 10BaseT network, but nobody is using that any more, right?), the extra work your approach entails might be worth the effort. But with modern networks (increasingly tending towards 1Gbps instead of just 100Mbps) and modern PCs (plenty of RAM for caching data and plenty of available CPU cycles), I think it's not necessary to do the extra work.

I'm not certain it ever was, to be honest.

Um, I thought I said that TCP will attempt to cover for the unreliable nature of the connection by re-requesting packets whose checksums fail and so forth, which manifests to end users as a slow network connection. The application is ignorant of what TCP is actually doing to hide that fact. But that goes back to the question: is TCP/IP really the protocol that is used to share a file across the network? I would be floored if it were using a different protocol, especially one that does not check packets as they arrive, such as UDP.

The problem with WiFi is at a much lower level than the networking protocol -- it's the "physical" layer that is unreliable ("physical" in quotes, since it's the radio signal that I'm referring to). It's as though you had a wired network and periodically somebody unplugged your central hub. There's nothing any networking protocol can do about that -- and TCP/IP shuts down when the physical network layer is absent (try starting a ping xxx.xxx.xxx.xxx -n 100 and turn the switch off on your WiFi card). That is, when your network adaptor can no longer find the wireless network, it turns off all TCP/IP services. UDP or not, if the network device is disabled/inaccessible, there's no protocol that can overcome that.

Also, WiFi timeouts are at least an order of magnitude larger (and probably several magnitudes larger) than what Jet/ACE can recover from, and larger than what TCP/IP can recover from, as well. That's not too large for streaming protocols or HTTP requests, but that's because the clients for those protocols have been built with long timeouts, because they know the connections are unreliable when considered end-to-end. They are also stateless connections, so they can get away with buffering and resuming after a pause. Jet/ACE is the least stateless application you're ever going to encounter, so it just can't recover from anything other than the smallest disconnects.
 

Banana

The reason is to show you updates from other users. This interacts with the refresh interval and with record locking (optimistic or pessimistic). It has to happen on a timer so that, for instance, you can be given a visual representation of the locking status of a record in the record selector.

Makes sense, I suppose.

It's irrelevant to Jet/ACE, which knows nothing about the networking layer. To Jet/ACE, a local hard drive and a file share on a remote server (mapped to a drive letter or not) are exactly the same.

We both agree on that point.

I don't think knowing the networking protocol helps the discussion at all, but it would help in troubleshooting. In general, there is no difference in terms of reliability in regard to Jet/ACE for the various network protocols because Jet waits a really long time (in computer terms) before it gives up on a spotty connection. On a local LAN, TCP/IP transmission errors are going to be corrected in milliseconds or less, and if they're not, it means something is badly wrong at a level either above TCP/IP (something's blocking packets) or at a level beneath it (physical wiring). I think that this is the case for all the common protocols, so I don't really think they make a difference.

Aha, I think I see where I stumbled. I was thinking that JET/ACE was giving up prematurely, before TCP/IP or any other protocol would give up. If it is indeed the case that the JET/ACE timeout is more generous, then I can see your point here.

I think it was for compatibility with the VBE, but I could be wrong on that.

Ok. Still, it's backwards. :\

The amount of data involved is minuscule, and has to be retrieved only once per session

...

I think you're futzing with optimizing sub-1% performance issues (substantially less than 1%, in fact).

...

with much slower networks (your approach might be justified for large lookup tables on a 10BaseT network, but nobody is using that any more, right?)

FWIW, in two projects where I did this, one was a MySQL server across a WAN for three remote offices, and the other was SQL Server & Citrix for clients worldwide; in both cases, there was a huge performance difference when I moved the lookup tables from the back end to the front end. Furthermore, I felt that setting up the mechanism to update the lookup tables periodically was easy enough.

I won't argue that this isn't overkill for a JET-only scenario on a local network. It may just have been my luck, but I run into the need more often than not.
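
(For anyone curious, the refresh mechanism amounts to very little code; a minimal sketch, with hypothetical table names -- in the real projects the source was a linked MySQL or SQL Server table:)

Code:
' Minimal sketch: re-pull a small, slow-changing lookup table from the
' linked back end into a local front-end copy. Table names are hypothetical.
Public Sub RefreshLocalLookup()
    Dim db As DAO.Database
    Set db = CurrentDb
    On Error Resume Next
    db.Execute "DROP TABLE tblStatus_Local"   ' discard the stale local copy, if any
    On Error GoTo 0
    ' Make-table query; tblStatus is the linked back-end table
    db.Execute "SELECT * INTO tblStatus_Local FROM tblStatus", dbFailOnError
End Sub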

Also, WiFi timeouts are at least an order of magnitude larger (and probably several magnitudes larger) than what Jet/ACE can recover from, and larger than what TCP/IP can recover from, as well. That's not too large for streaming protocols or HTTP requests, but that's because the clients for those protocols have been built with long timeouts, because they know the connections are unreliable when considered end-to-end. They are also stateless connections, so they can get away with buffering and resuming after a pause. Jet/ACE is the least stateless application you're ever going to encounter, so it just can't recover from anything other than the smallest disconnects.

Indeed. Two things.

1) If it's indeed the timeout, and not bad packets, that is usually the cause of corruption, then I have to wonder why JET/ACE would do this -- it's almost as if they wrote self-destruct code in the event of a timeout, rather than gracefully rolling back to a consistent state and then giving up.

2) While I would not want to recommend this as a potential solution, I would like to know why we have registry settings such as LockTimeOut and LockRetry, as well as other related settings, which could in theory be modified to accommodate a spotty network. (Note that, as said before, I wouldn't consider it a good long-term solution and would prefer to fix the underlying problem, but I am curious as to why it hasn't been suggested before, even for WiFi.)
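
(For reference, those settings can also be changed per-session through DAO rather than by editing the registry; a minimal sketch, with illustrative values only:)

Code:
' Illustrative values only -- per-session overrides of the Jet settings
' kept under HKLM\SOFTWARE\Microsoft\Jet\4.0\Engines\Jet 4.0.
' These affect only the current DBEngine instance, not the registry itself.
DBEngine.SetOption dbLockRetry, 40       ' attempts to acquire a page lock before giving up
DBEngine.SetOption dbLockDelay, 200      ' ms to wait between those attempts
DBEngine.SetOption dbPageTimeout, 10000  ' ms before a cached page is re-read from the file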
 

dfenton

1) If it's indeed the timeout, and not bad packets, that is usually the cause of corruption, then I have to wonder why JET/ACE would do this -- it's almost as if they wrote self-destruct code in the event of a timeout, rather than gracefully rolling back to a consistent state and then giving up.

Jet was written to protect your data. It gives up when it can't ensure consistent data gets written, and it does so as soon as the connection is suspect.

That's a GOOD THING.

2) While I would not want to recommend this as a potential solution, I would like to know why we have registry settings such as LockTimeOut and LockRetry, as well as other related settings, which could in theory be modified to accommodate a spotty network. (Note that, as said before, I wouldn't consider it a good long-term solution and would prefer to fix the underlying problem, but I am curious as to why it hasn't been suggested before, even for WiFi.)

In a single-user scenario, that could work, but in a multi-user scenario, it would be a recipe for disaster.

I think Microsoft is probably much smarter about this than you and I and has probably given a lot more thought to the issue than we have. The limitations are what they are and were reasonable for the scenario that existed back when Jet was created (i.e., an unreliable networking layer like WiFi was not even contemplated at that time). It's not like you don't have alternatives (i.e., use a server back end, which is going to be entirely immune to these issues).
 

Banana

Jet was written to protect your data. It gives up when it can't ensure consistent data gets written, and it does so as soon as the connection is suspect.

That's a GOOD THING.

Yet this isn't exactly what happens should someone use the application via WiFi or with a faulty/failing NIC. The outcome in such cases is some kind of corruption, instead of a dialog warning the users that Jet couldn't continue with such a poor and unreliable connection -- hence my puzzlement.

In a single-user scenario, that could work, but in a multi-user scenario, it would be a recipe for disaster.

Care to elaborate? I'm quite interested in 'why'.

I think Microsoft is probably much smarter about this than you and I and has probably given a lot more thought to the issue than we have. The limitations are what they are and were reasonable for the scenario that existed back when Jet was created (i.e., an unreliable networking layer like WiFi was not even contemplated at that time).

I agree -- the intelligence of the whole development team behind Access is certainly an order of magnitude greater than mine or yours. However, there have been some decisions that have had consequences; take the single-file solution, for example. It made perfect sense in that it was convenient for users to share a single file containing both application and data, yet 'best practices' dictate that we ought always to split the database or else risk instability. Unreliable networking is another such example, and I'm hoping to learn more about how this occurs at a lower level.

It's not like you don't have alternatives (i.e., use a server back end, which is going to be entirely immune to these issues).

I agree, and would say we in fact have more alternatives than we did a decade ago (e.g., free database engines reliable enough to use should we need them, for a company that can't/won't pay for an expensive license for big iron and a full-on server RDBMS).
 

dfenton

Yet this isn't exactly what happens should someone use the application via WiFi or with a faulty/failing NIC. The outcome in such cases is some kind of corruption, instead of a dialog warning the users that Jet couldn't continue with such a poor and unreliable connection -- hence my puzzlement.

Very often, there's no damage to the data at all, but this is all outside Jet's control, so I don't see how you could blame Jet for giving up!

In a single-user scenario, that could work, but in a multi-user scenario, it would be a recipe for disaster.

Care to elaborate? I'm quite interested in 'why'.

Well, in the multi-user scenario, if something gets written anyway, without the client Jet instance being able to read the LDB and figure out whether it has a write lock, it could step on other users' edits, or corrupt the file.
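
(As an aside, you can watch that bookkeeping from VBA: the Jet OLE DB provider exposes a "user roster" schema rowset reporting what the LDB knows about attached clients. A minimal sketch -- the back-end path is hypothetical:)

Code:
' Minimal sketch: list the machines Jet believes are attached to a back end,
' via the provider-specific user roster schema. The path is hypothetical.
Dim cn As Object, rs As Object
Set cn = CreateObject("ADODB.Connection")
cn.Open "Provider=Microsoft.Jet.OLEDB.4.0;" & _
        "Data Source=\\server\share\backend.mdb"
' -1 = adSchemaProviderSpecific; the GUID names Jet's user roster rowset
Set rs = cn.OpenSchema(-1, , "{947bb102-5d43-11d1-bdbf-00c04fb92675}")
Do Until rs.EOF
    Debug.Print rs!COMPUTER_NAME, rs!LOGIN_NAME, rs!CONNECTED, rs!SUSPECT_STATE
    rs.MoveNext
Loop
cn.Close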
 
