Connecting databases stored in different networks

aman

Registered User.
Local time
Yesterday, 20:05
Joined
Oct 16, 2008
Messages
1,250
Hi there

My boss wants me to present some ideas on how to link a frontend and backend when they are stored on different networks.

The frontend will be stored on one network in one company, and the backend is stored on a different network in another company.
Could we do something like exporting the data to a .csv file, which one company sends to the other, with some way of importing that data into the backend on the other network? And when a change is made to that data, the change should be reflected on the other network as well.
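To make the idea concrete, here is a rough sketch of that round trip in Python (the orders table, its columns, and the "dirty" flag are all made up, and SQLite stands in for the real backend, so treat this as a shape of the idea rather than a working design):

Code:
import csv
import sqlite3  # stands in for the real Access back end

SCHEMA = ("CREATE TABLE IF NOT EXISTS orders ("
          " id INTEGER PRIMARY KEY, name TEXT, amount REAL,"
          " dirty INTEGER DEFAULT 0)")

def export_changes(db_path, out_path):
    """Write rows flagged as changed since the last exchange to a CSV."""
    con = sqlite3.connect(db_path)
    con.execute(SCHEMA)
    rows = con.execute(
        "SELECT id, name, amount FROM orders WHERE dirty = 1").fetchall()
    with open(out_path, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["id", "name", "amount"])
        w.writerows(rows)
    con.execute("UPDATE orders SET dirty = 0 WHERE dirty = 1")
    con.commit()
    con.close()

def import_changes(db_path, in_path):
    """Apply a received CSV to the local copy (last writer wins)."""
    con = sqlite3.connect(db_path)
    con.execute(SCHEMA)
    with open(in_path, newline="") as f:
        for r in csv.DictReader(f):
            con.execute(
                "INSERT INTO orders (id, name, amount) VALUES (?, ?, ?) "
                "ON CONFLICT(id) DO UPDATE SET"
                " name = excluded.name, amount = excluded.amount",
                (r["id"], r["name"], r["amount"]))
    con.commit()
    con.close()

Note that this simple merge is last-writer-wins, so edits made to the same record on both sides would silently overwrite each other.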

Any ideas are welcome.
Thanks
 

CJ_London

Super Moderator
Staff member
Local time
Today, 04:05
Joined
Feb 19, 2013
Messages
16,610
Access needs to be able to see the back end in order to work, so if company A cannot see the network of company B, it won't work.

Yes, you can send files to an email folder where the receiving Outlook triggers a receive event and has code to update Access, but it will be messy, slow, and carries a risk of errors.

I think you need a different approach - look at using something like SQL Azure or Amazon Web Services for the back end; you can then link Access front ends to it from a number of locations.
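To make that concrete, a minimal sketch of reaching such a cloud back end over ODBC (server, database, credentials, and the customers table are all placeholders; Access itself would link its tables through the same ODBC driver rather than run code like this):

Code:
import pyodbc  # needs Microsoft's "ODBC Driver 17 for SQL Server" installed

# All connection details below are placeholders for illustration.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=yourserver.database.windows.net;"
    "DATABASE=yourdb;"
    "UID=youruser;PWD=yourpassword;"
    "Encrypt=yes;"
)
cur = conn.cursor()
cur.execute("SELECT TOP 5 id, name FROM customers")  # hypothetical table
for row in cur.fetchall():
    print(row.id, row.name)
conn.close()

The point is that both companies connect to the same single back end, so there is nothing to synchronize.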
 

Solo712

Registered User.
Local time
Yesterday, 23:05
Joined
Oct 19, 2012
Messages
828

If I understand you correctly, you are talking about multiple versions or replicas of the same database in different locations that need to be synchronized. That is a difficult proposition, and doable safely only if the database is partitioned, so that each location writes only into its own work space and all the constraints, data validations, and business rules are enforced within that work space. If there is anything in your org close to a need for real-time availability of all data in the database (across the networks), then I would not recommend it; it's asking for trouble. Look into an inexpensive cloud solution like SQL Azure.
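For illustration, a minimal sketch of that partitioned layout, with made-up names and SQLite standing in for the real back end:

Code:
# Every row is keyed by the site that owns it, so each location only ever
# writes its own rows and a merge can never collide. The table name, site
# codes, and SQLite stand-in are all illustrative.
import sqlite3

SITE_ID = "A"   # would be "B" at the other company

con = sqlite3.connect("local.db")
con.execute(
    "CREATE TABLE IF NOT EXISTS orders ("
    " site TEXT NOT NULL,"
    " local_id INTEGER NOT NULL,"
    " payload TEXT,"
    " PRIMARY KEY (site, local_id))")

def insert_order(local_id, payload):
    # Local writes are always stamped with our own site code.
    con.execute(
        "INSERT INTO orders (site, local_id, payload) VALUES (?, ?, ?)",
        (SITE_ID, local_id, payload))
    con.commit()

def merge_foreign_rows(rows):
    # Rows arriving from the other site replace only that site's partition,
    # so they can never overwrite anything written locally.
    for site, local_id, payload in rows:
        assert site != SITE_ID, "remote feed must not contain our own rows"
        con.execute(
            "INSERT OR REPLACE INTO orders (site, local_id, payload)"
            " VALUES (?, ?, ?)", (site, local_id, payload))
    con.commit()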

Best,
Jiri
 

Mark_

Longboard on the internet
Local time
Yesterday, 20:05
Joined
Sep 12, 2017
Messages
2,111
The two most important questions before we can give you a meaningful answer:
First, what is the network topology?
Second, what level of accessibility do you need to the back end?

If this is to be accomplished by a courier bringing you a 9-track tape, the answer will be far different than if you have a T3 connection.
 

aman

Registered User.
Local time
Yesterday, 20:05
Joined
Oct 16, 2008
Messages
1,250
Guys, I'm back again. I have some clarification of the requirements now. Hope to receive some help from you guys on this:

Company A: has an Access frontend and backend linked together.

Middle layer: not sure about this part (is there anything like email functionality that will send data from Company A's backend to Company B?).

Company B: should be able to access the data, make some changes to it, and send it back to Company A.

Company A: should then be able to update its backend with the changes.
 

Mark_

Longboard on the internet
Local time
Yesterday, 20:05
Joined
Sep 12, 2017
Messages
2,111
aman,

You really need to find out how the two will be connected, as well as how quickly changes at "Company B" need to be reflected.

If it is by email, I'd export either an Excel file or create a "transport" DB that you can enter all of your changes and additions into, as well as flag any deletions in. Email it to the recipient; when they get around to it, they run a process that updates files based on your requested changes.

This is a pretty horrible way to do it, as you have no control over how quickly changes happen. The only time this makes any sense is if the users at "Company B" cannot change most existing data but are adding new records (go to client, get the client's report, send it back to the company).

If you have any type of dedicated connection that won't let you directly connect to the back end, I'd do an FTP of a formatted text file. This lets you have a "record" of what happens, and it can include checksums. Basically you have two locations where files are dropped, one for "Company A" and one for "Company B". Company B writes a text file and drops it in the "Company A" file drop. A process at Company A reads in the file and updates the back end. The process then finishes by writing a file to "Company B" that is read in by your FE to let you know the record has been updated. This way the front end at "Company B" knows the transaction completed.
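A rough sketch of that file-drop exchange (the directory names, the JSON record layout, and the apply_change stub are all invented for illustration; the real exchange could go over FTP instead of a shared path):

Code:
import hashlib
import json
from pathlib import Path

DROP_A = Path("drop_company_a")   # Company B drops change files here
DROP_B = Path("drop_company_b")   # Company A drops acknowledgements here
for d in (DROP_A, DROP_B):
    d.mkdir(exist_ok=True)

def apply_change(change):
    """Stand-in for the real back-end update."""
    print("applying", change)

def send_changes(changes, batch_no):
    """Company B side: write a batch of change records plus its checksum."""
    body = json.dumps(changes).encode("utf-8")
    digest = hashlib.sha256(body).hexdigest()
    (DROP_A / f"batch_{batch_no}.json").write_bytes(body)
    (DROP_A / f"batch_{batch_no}.sha256").write_text(digest)

def receive_changes(batch_no):
    """Company A side: verify the checksum, apply, then acknowledge."""
    body = (DROP_A / f"batch_{batch_no}.json").read_bytes()
    expected = (DROP_A / f"batch_{batch_no}.sha256").read_text().strip()
    if hashlib.sha256(body).hexdigest() != expected:
        (DROP_B / f"batch_{batch_no}.rejected").write_text("bad checksum")
        return
    for change in json.loads(body):
        apply_change(change)
    (DROP_B / f"batch_{batch_no}.ack").write_text("ok")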

Very "old school" way of doing this but one that does allow for a decent amount of security.

For an actual good recommendation please get clarification on not only what type of connection is going to be set up but what permissions "Company B" has to the back end at "Company A".
 

The_Doc_Man

Immoderate Moderator
Staff member
Local time
Yesterday, 22:05
Joined
Feb 28, 2001
Messages
27,148
This is the kind of problem that requires you to define transactions that you will allow to occur, and what you then do is build an external transaction file that details the changes.

When I worked with the U.S. Navy, we had literally dozens of ORACLE-based and SQL Server-based databases that were located in different sub-nets. What we had to do was build an entire protocol of transactions that we could communicate from point A to point B. FURTHER, we had to build the protocol to include ACKNOWLEDGEMENT and REJECTION responses from B back to A. In fact, either side could accept or reject the other side's transactions if there was a failure in the cross-checking done to verify that our transactions were properly aligned.

The main machine for which I was the primary systems administrator talked to not less than eighteen different systems. In EVERY case, they sent us transactions and we sent back responses plus our own transactions. Which usually meant that we transmitted text-record files that described what had changed. This was a military system so I can't tell you what the actual transactions contained, but I can synthesize an example.

On a given day, we might send records that LOOKED LIKE

#1, 12152017, "Action=Personnel", "SSN=314152653", "Name=Peter Piper", "Event=Enlisted", "Unit=12345", etc. etc.
#2, 12152017, "Action=Personnel", "SSN=141421486", "Name=David Twomey", "Event=Discharge", "Unit=98765", "Type=Honorable", etc. etc.
#3, 12152017, "Action=Personnel", "SSN=333333333", "Name=Mike Smith III", "Event=Promotion", "Unit=45432", "Rate=E6", "Designation=YNC", etc. etc.
#4, 12152017, "Action=Accept", "Record Number=3", "Record Date=12142017"
#5, 12152017, "Action=Reject", "Record Number=5", "Record Date=12142017", "Cause=No Such Member"
etc. etc.

In the above, #1-3 are new records; #4 and #5 show the disposition of previous records.

And the numbering of records was important, since you had to know if a transaction failed and that meant storing the pending transactions that had been sent until such time as an acceptance or rejection came back.

In essence, when dealing with non-linked databases this way, you need to build a protocol of how you want to pass information back and forth between the databases and that means a MASSIVE design phase for what your databases can tell each other.
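As a toy illustration of that numbering-and-disposition idea (the transport and field names are made up; real records looked like the examples above):

Code:
# Outgoing transactions are numbered and kept in a pending store until the
# other side answers with an Accept or Reject for that record number.
pending = {}      # record number -> transaction awaiting disposition
next_number = 1

def transmit(record):
    """Stand-in for the real transport (file drop, FTP, tape...)."""
    print("sending", record)

def send_transaction(fields):
    """Number a transaction, remember it, and hand it to the transport."""
    global next_number
    record = {"number": next_number, **fields}
    pending[next_number] = record
    next_number += 1
    transmit(record)
    return record["number"]

def handle_response(response):
    """Process an Accept/Reject that refers back to one of our records."""
    number = response["record_number"]
    if number not in pending:
        raise KeyError(f"response for unknown record {number}")
    if response["action"] == "Accept":
        del pending[number]          # done; safe to forget
    else:                            # Reject: pull it for manual review
        failed = pending.pop(number)
        print("REJECTED:", failed, "cause:", response.get("cause"))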

If there is ANY WAY to legitimately have a shared backend between the two databases, it might be cheaper than the design, testing, and implementation of such a complex thing as a formal data-exchange protocol. And before you say, "I don't need anything that complex" - trust me, the day will come that you have to resolve a dissonance between the two databases and without good transaction records, you will NEVER get it straight.
 

Pat Hartman

Super Moderator
Staff member
Local time
Yesterday, 23:05
Joined
Feb 19, 2002
Messages
43,233
As you can see by Doc's response, bidirectional syncing is a nightmare of major proportions. You're going to need to think long and hard about how closely sync'd these databases need to be. For example, can you replace the remote copies from the master weekly and in the interim send add/change requests from the remote copies daily?

You could still be working out the design of this next year and then you have to implement it.

Perhaps you should consider using Citrix. That will give all users the ability to connect in real time to the exact same application. Only the host company needs to run Citrix server. Their own local people can continue accessing the application via the LAN as they do currently. Only the remote users need to use Citrix. The Citrix (RDP may also work) solution does not require any application changes.
 

Mark_

Longboard on the internet
Local time
Yesterday, 20:05
Joined
Sep 12, 2017
Messages
2,111
Doc,
My condolences on that. Last time I had to do something like this was in the '90s. Real pain. Did save our bacon though when the server crashed in a horrible, horrible way. We had backups of the transaction files, so we could re-import everything rather quickly.
 

The_Doc_Man

Immoderate Moderator
Staff member
Local time
Yesterday, 22:05
Joined
Feb 28, 2001
Messages
27,148
Mark, keeping transaction files saved our skin once when our dedicated database server back-end died a horrible death on us.

We had an SQL-flavor of database on a UNIX-like system called ShareBase. (Some of their engineers later went to work for Sybase.) The machine had mirrored (RAID-1) drives. But we had a "hot spot" on one of the disks and, due to the primitive nature of the O/S, didn't realize it. The O/S wasn't a full-blown UNIX, it was "tailored" to be a "black-box" machine for database use only.

So one day in 1993 we got this console message that a ShareBase data disk had been demoted from RAID-1 to JBOD (Just a Bunch Of Disks). Fifteen minutes later we got the message that the SAME LOGICAL DISK was demoted to OFFLINE status. Needless to say, everything went to Hell in a handbasket.

Took us two days to physically rebuild the system with two new disks to establish blank mirrors. Then we had to LOGICALLY rebuild the system by searching backwards through time on our backup tapes until the FE and BE matched up exactly with the same "generation" number - essentially an autonumber that marked a time when we had a coherent snapshot of the whole database. Loaded that backup on the disks, started the FE in restricted mode.

Then we replayed all of the transactions forward, stopping at each glitch to manually fix it and play in the response transaction for that message. Took us 3 weeks to come back fully online and we had a backlog of over 500 new transaction files to replay that had accumulated on the FE side in the 3 weeks that the BE was down hard.

Shortly after that debacle, we ditched the ShareBase and went to an ORACLE solution that gave us greater control of (and greater ability to detect) disk hot spots. We moved certain "popular" files around until the load was balanced among the disks. After that we kept track of the general accounting logs to show which disks had the greatest I/O traffic and we also kept track of the system device error logs to note if any disk was starting to throw errors on either member of the mirror sets. You know what they say... once burned, twice shy.

After that change we never had that problem again in over 24 years with some flavor of ORACLE as the BE. But those transactions essentially saved our butts because we never lost a record. Everyone got paid. And we had no more than half a dozen official Congressional inquiries of the form "WTF? Why is my constituent not being paid?" We did well enough that nobody had to testify before Congress. There are other projects for which that cannot be said - but fortunately, none of them were mine.
 

Lightwave

Ad astra
Local time
Today, 04:05
Joined
Sep 27, 2004
Messages
1,521
I would definitely research a single Azure backend linked to the required number of front ends. Creating your own transfer protocols is probably for the confident and interested.

A developer SQL Azure database starts at about £5 a month; this should be all you need until you have the principle up and running, after which you can upgrade the database to something more substantial (I am told without any break in service).

On that point, has anyone flicked from one tier to another in SQL Azure? What did you find? Nothing, I suspect. I've got some SQL Azure databases but have never changed tier.

The main thing that might hold you back is that you will probably need a debit card to kick off signing up to Azure. Some companies heavily restrict that kind of thing (understandably), but as a personal development exercise it is probably worth signing up for the pay-as-you-go option just to experiment.
 

Pat Hartman

Super Moderator
Staff member
Local time
Yesterday, 23:05
Joined
Feb 19, 2002
Messages
43,233
To summarize:

- Citrix and RDP are the only solutions that do not require a rewrite of the application. Either party can be the host, or you can use a third party, but the third-party option will end up being ~$45 per month per user, though that includes the O365 license.
- Two separate instances of SQL Server can be sync'd, but you will still need manual intervention to resolve conflicts.
- Using a shared Azure BE will require rewriting the FE to use unbound forms, and even then you may run into the app being too slow to be effective.
- Trying to sync the data yourself is not something I would want to attempt.
 
