VBA Replication Substitute

Pienuts

Registered User.
Local time
Today, 07:46
Joined
May 2, 2014
Messages
106
Hello, all! I have a question about how to deal with the loss of replication in Access.

I will start by mentioning that I work for the government, and am limited to using Access 2013. No extra servers and no online connections – if only!

I had previously chosen to use replication because about half of the users take their laptops into the field (where there is not even cell service) and Add/Edit/Delete multiple records while doing inspections. They then come back to the office, sync and do it all over again the next day. This worked very well – until it didn’t! As of last month all users now have Access 2013 and, as I’m sure we’re all aware, Access replication is no longer an option.

So… I need to figure out a way to make this happen solely via code now, I guess. My searches haven’t been super fruitful, so if there is a forum post or an article somewhere to assist me please point me in the right direction!

My first thought is that there will have to be timestamps for all edited/added/deleted records in the database to determine what was changed since the last “sync”, teamed up with a plethora of update/append/delete queries for each affected record – am I on the right track, or am I missing something important? I suppose I will also have to work out a new Primary Key numbering system to eliminate duplicates between the remote users...
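A minimal sketch of the stamping that approach would need, assuming edits go through a bound form and each table has a Date/Time field (the field name LastModified is hypothetical):

```vba
' Sketch: stamp a LastModified field whenever a record is saved through a form,
' so the sync routine can later tell what changed since the last sync.
' Assumes each table carries a Date/Time field named LastModified.
Private Sub Form_BeforeUpdate(Cancel As Integer)
    Me!LastModified = Now()
End Sub
```

The same event fires for both new and edited records, so one handler covers adds and edits; deletes would need separate treatment.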

Any help is appreciated. My VBA skills are passable, but if people want to throw out specific functions they may use to assist me in wrapping my head around the details, I won’t stop them! ;)

Thanks, folks! I don’t think I can do this without your assistance!
 

CJ_London

Super Moderator
Staff member
Local time
Today, 14:46
Joined
Feb 19, 2013
Messages
16,606
My first thought is that there will have to be timestamps for all edited/added/deleted records in the database to determine what was changed since the last “sync”, teamed up with a plethora of update/append/delete queries for each affected record – am I on the right track, or am I missing something important? I suppose I will also have to work out a new Primary Key numbering system to eliminate duplicates between the remote users...
You are on the right track. You can make the autonumber key random rather than incremental, but you will also need a GUID field as an absolute record ID to cross-compare when syncing, plus a timestamp field. An alternative is a combined index as the primary key - say autonumber plus machine ID.

It's fairly straightforward when only syncing tables. Problems come when you want to sync changes to the other objects - queries, forms, etc. And you will still need to develop a clash handler (for when both the master record and the local record have changed since the last sync).

An alternative is to move the back end to SQL Server Express - I think that has a syncing facility.
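If the GUID-plus-timestamp route is taken, the fields can be added in code. A sketch using DAO - the field and procedure names are assumptions, not anything from this thread:

```vba
' Sketch: add a GUID field and a timestamp field to a table via DAO,
' so each record carries an absolute ID to cross-compare on sync.
' "SyncGUID" and "LastModified" are hypothetical field names.
Sub AddSyncFields(tblName As String)
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef
    Set db = CurrentDb
    Set tdf = db.TableDefs(tblName)
    ' dbGUID is the DAO type for a Replication ID field
    tdf.Fields.Append tdf.CreateField("SyncGUID", dbGUID)
    tdf.Fields.Append tdf.CreateField("LastModified", dbDate)
End Sub
```

Note that rows which already exist will have a Null SyncGUID after this runs; they would need a one-off pass to populate the GUIDs before the first sync.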
 

Uncle Gizmo

Nifty Access Guy
Staff member
Local time
Today, 14:46
Joined
Jul 9, 2003
Messages
16,271
I kept a note about some replication software from 1996. I think this predates Microsoft introducing replication into MS Access - I believe that arrived around 2000, though I'm not 100% sure. There is a sample database, for an unknown version of Access - in other words, you will probably have trouble opening it! Anyway, here's the link:-

http://trigeminal.fmsinc.com/lang/1033/codes.asp?ItemID=16#16

and this is what you'll need to look for

Code from Smart Access Article "Manage Replication Conflicts"
In the May, 1996, issue of Pinnacle's Smart Access, Lynn Shanklin and I wrote an article entitled "Manage Replication Conflicts" that covers the ins and outs of replication conflicts and how to minimize and manage them. That article was so popular I still get e-mails every month about it. This was not helped by the fact that the whole article has been included in full (but without source code) on MSDN for the last few years! To see the article, you can go to the Pinnacle site or MSDN on the web. For the code, you can download it from here in the file conflicts.zip. (Pinnacle charges $5.00 for each article.* I do not profit from this).

I have no idea if it will be any good to you - it's something I kept just in case I ever needed to consider doing replication...
 

Pienuts

Registered User.
Local time
Today, 07:46
Joined
May 2, 2014
Messages
106
Alright, Uncle Gizmo, I'll take a stroll down that rabbit hole! MSDN does not seem to have the article any more, but it has some similar ones. I'll try to download the zip on my home computer later.
Thanks!
 

Pienuts

Registered User.
Local time
Today, 07:46
Joined
May 2, 2014
Messages
106
It's fairly straightforward when only syncing tables. Problems come when you want to sync changes to the other objects - queries, forms, etc. And you will still need to develop a clash handler (for when both the master record and the local record have changed since the last sync).
Alright, that makes me feel better! The remote users would only be making changes to tables, so I won't have to worry about non-table objects.

I'll get to work on changing the AutoNumber to random and creating GUID/timestamp fields for each table - do you know of any examples of the kind of code I'll need to do the actual sync? I feel iteration is the key, and that is not my forte. :eek: The clash handler will be fun too, but one thing at a time! Hopefully Uncle Gizmo's link will help with that.
 

CJ_London

Super Moderator
Staff member
Local time
Today, 14:46
Joined
Feb 19, 2013
Messages
16,606
do you know of any examples of the kind of code I'll need to do the actual sync?
I don't, but logic says you need to:

1. Identify records in local but not in master - append to master.
2. Identify records in master but not in local - append to local.
3. Identify records in both local and master, then:
- those that have not changed in either since the last sync - ignore
- those that have changed in master since the last sync, but not in local - update local
- those that have changed in local since the last sync, but not in master - update master
- those that have changed in both since the last sync - clash management. Resolve by an unambiguous business rule, or by user selection to update one or the other, or by user selection field by field (the sort of thing to consider: in a customer record, the phone number has been changed in the master while the fax number has been changed in the local copy). Or leave the record unreconciled to be handled later - perhaps the user needs to talk to someone to help resolve the clash.
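The first step can be sketched as an unmatched-records append query run from VBA, matching on the GUID. All table and field names here are assumptions for illustration:

```vba
' Sketch of "in local but not in master": append local records the master
' doesn't have yet, matched on a GUID field. Names are hypothetical.
Sub AppendNewLocalRecords()
    Dim db As DAO.Database
    Set db = CurrentDb
    db.Execute _
        "INSERT INTO tblInspections_Master " & _
        "SELECT L.* FROM tblInspections_Local AS L " & _
        "LEFT JOIN tblInspections_Master AS M " & _
        "ON L.SyncGUID = M.SyncGUID " & _
        "WHERE M.SyncGUID IS NULL;", dbFailOnError
End Sub
```

The mirror-image query (swap the two tables) handles "in master but not in local"; the update steps are the same join with a comparison of the two timestamp fields in the WHERE clause instead of the IS NULL test.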

To summarise, the minimum field requirements are:

autonumber primary key - random
GUID key - for syncing
timestamp - date/time of last append or update
lastsynced - date/time of last sync

Things which won't be caught but which you may need to consider: a new customer is added in both local and master. You will potentially end up with a duplicate customer, so you may need additional functionality to compare newly added records based on, say, customer name.

Also, deleting data is difficult. If a record is deleted in the master, the first query above will append it again, since it will meet the criteria. The usual solution is to have an 'is deleted' field, or a 'to date' to indicate when a record ceases to be active.
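The soft-delete idea amounts to an UPDATE in place of a DELETE - a sketch, with hypothetical table and field names:

```vba
' Sketch: flag a record as deleted instead of removing it, so the sync
' queries don't resurrect it. Assumes a Yes/No field named IsDeleted and
' a Long primary key named InspectionID (both hypothetical).
Sub SoftDelete(recID As Long)
    CurrentDb.Execute _
        "UPDATE tblInspections " & _
        "SET IsDeleted = True, LastModified = Now() " & _
        "WHERE InspectionID = " & recID & ";", dbFailOnError
End Sub
```

Stamping LastModified at the same time means the deletion propagates through the normal "changed since last sync" path.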

One other tip: you may be tempted to use the GUID as the primary key and drop the autonumber field. You can, but be aware that a GUID (16 bytes) is four times larger than an autonumber (4 bytes), so your indexing will be slower - and once propagated across the other tables as foreign keys it will cause database bloat. You need the GUID for synchronising purposes only.
 
Last edited:

Pienuts

Registered User.
Local time
Today, 07:46
Joined
May 2, 2014
Messages
106
Thanks for the logic, CJ - this will assist me with doing it in bite-sized chunks. And I was going to ask the GUID question so thanks for the pre-emptive answer!

A couple thoughts:
- I can remove users' ability to delete records (not a big deal - deletes are not really something that happens much here), which would alleviate the delete problem. If there is a need in the future, I can add an IsDeleted Boolean field to the necessary tables.

- Would it be simpler to just append/update the master with the local changes and then replace the local back end with the master? Or would that be considered lazy? ;)

Thanks again - you are the best!
 
Last edited:

CJ_London

Super Moderator
Staff member
Local time
Today, 14:46
Joined
Feb 19, 2013
Messages
16,606
It's not lazy if it does the job - just efficient.

The only issue I can think of with just updating the master and not the local is that your business rule for unresolved conflicts becomes 'master wins'.
 

Pienuts

Registered User.
Local time
Today, 07:46
Joined
May 2, 2014
Messages
106
As long as all conflicts:
a) are resolved by the user, or
b) are appended to a tblConflict in the Master
before the override, that wouldn't necessarily be a problem?
 

CJ_London

Super Moderator
Staff member
Local time
Today, 14:46
Joined
Feb 19, 2013
Messages
16,606
a) means conflicts are solved, so that would be OK
b) sounds like a good workaround
 

Pienuts

Registered User.
Local time
Today, 07:46
Joined
May 2, 2014
Messages
106
I'm sorry to be a bother, but after a day of (mostly fruitless) research I now have a question about GUIDs - why are they so necessary for database replication? I only have at most 10 remote users entering ~300 records total - wouldn't random AutoNumbers fulfil the uniqueness requirement? Or, at most, concatenated with a DateTime?
Should I somehow be running queries on EVERY record in EVERY table at once? I was thinking I would loop through every table to accomplish my goal, but perhaps there is a more efficient solution I'm missing...
 

CJ_London

Super Moderator
Staff member
Local time
Today, 14:46
Joined
Feb 19, 2013
Messages
16,606
Potentially just a random PK would be sufficient - but random means random; it doesn't mean unique, which GUIDs will be. As I mentioned in an earlier post, you can create a unique number by combining, say, an autonumber field and a machine ID, but that means more work, and you need to avoid, say:

autonumber 12, machineID 1

and

autonumber 1, machineID 21

both of which will produce 121 as a 'unique' number

It's your project, you have to decide what is sufficient to ensure you can synchronise successfully.

Addendum: you need to decide the order of tables. Typically a collection of tables will have a 'root' - a top parent (and you may have more than one collection) - so you need to do the 'parents' followed by the child tables, e.g. customers (top parent), products (top parent), invoice header (a child to customers), invoice row (a child to invoice header and a child to products). And your queries are on a table-by-table basis, not record by record.
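One way to encode that ordering is a hard-coded parent-first list that a driver routine walks - a sketch, where SyncTable is a hypothetical routine that runs the append/update queries for one table:

```vba
' Sketch: sync tables parent-first, so child records never arrive before
' the parent rows their foreign keys point at. Table names and the
' SyncTable routine are hypothetical.
Sub SyncAll()
    Dim tables As Variant
    Dim i As Integer
    tables = Array("tblCustomers", "tblProducts", _
                   "tblInvoiceHeader", "tblInvoiceRow")  ' parents first
    For i = LBound(tables) To UBound(tables)
        SyncTable CStr(tables(i))
    Next i
End Sub
```

This keeps the "loop through each table" idea, just over an explicitly ordered list instead of the TableDefs collection.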

And if you don't want to use a GUID, use a number for unique IDs - text will be as inefficient as a GUID.
 
Last edited:

Pienuts

Registered User.
Local time
Today, 07:46
Joined
May 2, 2014
Messages
106
Okay, so I will be syncing one table at a time. I just wanted to make sure the keys didn't HAVE to be unique throughout the whole db - I have quite a lot of records, so I will choose something a little less "bloaty"!
Thanks once more! Tomorrow I start on the queries!
 

CJ_London

Super Moderator
Staff member
Local time
Today, 14:46
Joined
Feb 19, 2013
Messages
16,606
Note I have added an addendum to my last post.

Regarding your latest post - keys need to be unique by table.
 

Pienuts

Registered User.
Local time
Today, 07:46
Joined
May 2, 2014
Messages
106
Alright, I'm following you! I had not considered the fact that I have to do the root tables first - there goes my loop-through-each-table idea!
I'll get to work on this and probably post a new thread if (when!) I require query assistance.
Thanks for the guidance, CJ!
 

CJ_London

Super Moderator
Staff member
Local time
Today, 14:46
Joined
Feb 19, 2013
Messages
16,606
had not considered the fact that I have to do the root tables first
Probably only matters if you are maintaining referential integrity, but it's still a good principle to follow. Parents come before children - happens in real life as well :D
 

Uncle Gizmo

Nifty Access Guy
Staff member
Local time
Today, 14:46
Joined
Jul 9, 2003
Messages
16,271
Today I was using "Fast Glacier" to recover some lost files from Amazon storage.

The "Fast Glacier" software very nicely produced a log of what it was doing.

I was able to monitor what was going on, and I thought it might be an idea if you included some sort of logging system that records when records were added, deleted, or clashed - something like that.
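Such a log could be one table and one helper that every sync step calls - a sketch, where tblSyncLog and its fields are hypothetical:

```vba
' Sketch: append one row per sync action to a log table, so a sync run
' can be reviewed afterwards. tblSyncLog (LogTime, TableName, Action,
' RecordGUID) is a hypothetical table.
Sub LogSyncAction(tblName As String, action As String, recGUID As String)
    Dim db As DAO.Database
    Set db = CurrentDb
    db.Execute _
        "INSERT INTO tblSyncLog (LogTime, TableName, Action, RecordGUID) " & _
        "VALUES (Now(), '" & tblName & "', '" & action & "', '" & recGUID & "');", _
        dbFailOnError
End Sub
```

Called as, say, LogSyncAction "tblInspections", "APPEND", guidText after each successful query, it doubles as the raw material for the tblConflict idea discussed earlier.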
 

Pienuts

Registered User.
Local time
Today, 07:46
Joined
May 2, 2014
Messages
106
Using an audit log to point to where the changes happened was something else I was considering. I think I might take another crack at it - it might speed up the process significantly if I collect the right data.
 
