Now there's a question I can answer with facts, despite George's objections. Back in the 70s and 80s, when mainframes and super-minis roamed the DP departments (or rather, when the departments roamed around these pizza ovens), there was a thing called shared computing. It meant companies often joined hands to share the cost of these beasts, which had 64K of core memory and 30 users.
For twenty years I ran a service bureau that served large corporations with a specific computing and data-entry need: claims processing. That was after I did my time at McDonnell Douglas as tech support for the super-mini they manufactured and sold with the PICK OS/database. Some of my clients were McDonald's, the City of Los Angeles, and many car and truck dealerships, all of them computing remotely. I tend to group referential integrity and synchronization together when I talk about these environments, because we had to write the code for both underneath our applications. I wasn't the only one doing this: McDonald's and Carter Hawley Hale had massive distributed systems that all worked together, doing the same thing over periodic modem dial-ups to a "main" DB, and they too wrote their own flavor of referential integrity and synchronization between split back ends. But this very real history is a gray wall to George.
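To make that concrete for anyone who wasn't there, here's a minimal sketch in modern Python/SQLite terms (the schema and names are hypothetical, not what we actually ran on PICK) of the kind of thing we had to hand-roll: a remote site ships its changed rows over the nightly dial-up, and the receiving side enforces referential integrity itself before posting anything, because the database gave us none of that for free.

import sqlite3

def open_main_db():
    # Stand-in for the central database the remote sites dialed into.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE clients (client_id TEXT PRIMARY KEY)")
    db.execute("""CREATE TABLE claims (
                      claim_id TEXT PRIMARY KEY,
                      client_id TEXT,
                      amount_cents INTEGER)""")
    db.execute("INSERT INTO clients VALUES ('MCD')")
    return db

def apply_batch(db, batch):
    """Apply one dial-up batch, enforcing referential integrity by hand."""
    applied, rejected = 0, []
    for row in batch:
        # Our "referential integrity": reject any row whose parent record
        # is missing, rather than trusting the engine to do it for us.
        ok = db.execute("SELECT 1 FROM clients WHERE client_id = ?",
                        (row["client_id"],)).fetchone()
        if not ok:
            rejected.append(row["claim_id"])
            continue
        # Idempotent upsert, so a re-sent batch does not double-post.
        db.execute("""INSERT INTO claims
                      VALUES (:claim_id, :client_id, :amount_cents)
                      ON CONFLICT(claim_id) DO UPDATE SET
                          amount_cents = excluded.amount_cents""", row)
        applied += 1
    db.commit()
    return applied, rejected

db = open_main_db()
nightly_batch = [
    {"claim_id": "C-1001", "client_id": "MCD", "amount_cents": 12500},
    {"claim_id": "C-1002", "client_id": "BAD", "amount_cents": 9900},
]
print(apply_batch(db, nightly_batch))   # -> (1, ['C-1002'])

The idempotent upsert is the part that matters: dropped phone lines meant batches got re-sent, and a sync scheme that double-posts claims doesn't balance to the penny.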
Let's talk about one simple example that was employed in my business. We had about forty clients under contract. The average for each was about a half-billion dollars in processing that balanced to the penny each month, kind of like a bank. For each of them we had a separate account (now called a DB file) on our system. In that account was their data, not shared with any other client (though we did sometimes link the accounts together to pull off some massive cumulative comparisons when the FTC challenged one of our clients over anti-competitive practices).

All the clients did share two databases, though. One was for common data: country, state, city, Arbitron/marketing data, and so on. The other was one massive media file that kept track of every radio station, TV station, newspaper, magazine, and penny-saver-type publication in the USA and Canada. Those files weren't specific to any one client; they were, as we said, universal. We also had another account, very well protected, where all the software lived. Lots of time-share, accounting, and payroll companies used the same model. Only later did the idiots at Microsoft think it was a good idea to mix data with programming in one place.

If I start a payroll or claims-processing service here in Colombia in the future, I'll be following that model again as much as possible, for fiduciary-responsibility reasons. For one government RFP here, I'm looking at distributed sites that will process the data of 45 million people needing government health care. That's not all going to fit in one ACE or SQL database. However, I do like the idea of using Access as a front end, so that slightly sophisticated users can pound sand all day making queries against the data they are responsible for in a geographic area, without bothering me too much.
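For what it's worth, that account layout maps almost one-to-one onto something you can still sketch today. Here's a toy version in Python/SQLite (the file names and the stations table are made up for illustration): one private file per client, two shared "universal" files attached to every session, and the software living somewhere else entirely.

import os, sqlite3, tempfile

root = tempfile.mkdtemp()

# One isolated file per client account, plus the two shared files.
for name in ("mcd.db", "city_of_la.db", "common.db", "media.db"):
    sqlite3.connect(os.path.join(root, name)).close()

def open_client_session(client_file):
    """Open one client's private data with the universal files attached."""
    db = sqlite3.connect(os.path.join(root, client_file))
    db.execute("ATTACH DATABASE ? AS common",
               (os.path.join(root, "common.db"),))
    db.execute("ATTACH DATABASE ? AS media",
               (os.path.join(root, "media.db"),))
    return db

db = open_client_session("mcd.db")
db.execute("CREATE TABLE IF NOT EXISTS claims "
           "(claim_id TEXT, station_id TEXT)")
db.execute("CREATE TABLE IF NOT EXISTS media.stations "
           "(station_id TEXT PRIMARY KEY, station_name TEXT)")
# A client query joins private claims against the shared media file
# without ever copying that file into forty separate accounts:
rows = db.execute("""SELECT c.claim_id, m.station_name
                     FROM claims c
                     JOIN media.stations m
                       ON m.station_id = c.station_id""").fetchall()

The design point is the one we lived by: universal data is stored once and attached everywhere, each client's data never leaves its own account, and the programs don't live with either.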
In another thread, not this one, I pointedly asked about distributed processing between different back-end DBs in the Access environment. I'm surprised that I've gotten so little positive input, given that Access has now been around about twenty years, longer than our clumsy old minis and mainframes had been at the time.