Audit Update of a Field

Marshall,

Just saw your posts.
Background:
I was interested in "data macros" and could not find examples, articles, etc. So I decided to build something to understand the concept, see how it worked for me, and then share it. That was Spring 2017, and that's all I've done with data macros.


Can they be used/created with DDL?
No. As far as I know, you work with the macro interface (which is clunky/unwieldy, as others have said).

I saw in JDraw's example that I can export the macro as text, but I'm not sure if there is an easy way to import the text into the new/production BE.
I did little with respect to saving and loading text. Mike Krailo indicated he had success with saving and loading data macros as text (see this link).

Why are there TWO macros required?
I decided to use data macros to gather the CRUD (add/modify/delete) data and have a separate macro write the data to the audit log. So I had a data macro for each table and a separate named macro specifically for writing the audit record.

Why do I need macWriteAuditRec? Why couldn't I just set up tblAuditLog with "Create new record in tblAuditLog, Set Field1 = Parameter 1, etc."?
You don't. You could design your data macro/audit logging without using a named macro. I did what I did as a learning exercise and chose to have a separate macro write the audit record.

I want a real person's name when I display the audit results.
You can use a user-defined function with a table mapping usernames to real names, or another mechanism that suits your requirement. You can capture as much or as little data as you like about the source/person who made the change. You can add/remove fields from the audit table depending on your needs.
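For instance, a minimal sketch of such a lookup function - tblUsers, UserName and RealName are hypothetical names, and the username source is whatever you settle on:
Code:
' Sketch: translate the network login into a display name via a lookup table.
' tblUsers, UserName and RealName are placeholder names - adjust to your schema.
Public Function RealUserName() As String
    Dim strLogin As String
    strLogin = Environ$("Username")   ' or an API-based function like the one in the demo
    RealUserName = Nz(DLookup("RealName", "tblUsers", _
                              "UserName=""" & strLogin & """"), strLogin)
End Function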

I would like to have a button that displays the audit results for the current record only.
You can access and query the audit table as you would any other table.

Good luck with your project. Let us know your progress.
 
  • Where do I run the commands from? Obviously, I would want to start with a development copy of the backend and modify that and then later run all of the changes to the production BE, but do I run the statements from a FE linked to the BE, or do I open the BE directly, or something else?
  • Where and how do I save the DDL commands once I know they are working?
  • As I understand it, DDL only modifies structure, not data. I.e. I can add a field to the table, but if I want to populate the field in my test BE and copy that data to my production BE, I would need to use some type of SQL statement to do that. (Although the data can be populated while other users have the BE open).
  • How do I build the statements? i.e. let's say I add NewField1 to Table1 of my database via DDL and it works in the test BE. Now I want to add NewField2 to the same table. If I just add a new line to the DDL code, I'm guessing it will crash since NewField1 already exists. So do I run each command individually and then go back and join them together, or do I need to start with a clean copy of the existing production BE each time (I guess either way would work).
  • Is there a way to set the order of the table fields via DDL? (I know it really doesn't matter, but if I am adding NewField5A and I want it to appear after Field5, is there a way to do that via DDL? Normally, I would just go into Table Design View, click on either Field5 or Field6, and say Insert, but…) (I suppose I could manually open the table in the backend in design mode (with exclusive access) and change the field order, but that is only slightly less involved than just adding the field without using DDL.)
I thought I attached this earlier but maybe I didn't. This is a very old sample database that I used to update an ACE BE from one version to another. I ran the code so there wasn't as much validation as I would have needed if I left it to the user.
Obviously, the database will not work for you but you can see HOW it works.
1. The DDL is all saved as querydefs. They are numbered in the sequence in which they must be run. I made a decision a long time ago NOT to delete tables or fields. You can if you want, but due to the type of application I was distributing, I didn't. Therefore, the querydefs basically add columns and tables, but relationships might be added or deleted.
2. Notice that there are two references to a database: "THISDB" for the code db, including the SQL strings of the querydefs, and "db", which is used to execute the query string. There might be a smoother solution that uses a loop, but this is what I came up with 15 years ago and it worked. I set THISDB to CurrentDb() and I set db to the db that was selected on the opening form.
3. The code executes one query at a time. It copies the SQL from the querydef using THISDB and executes it using db. So that answers the first question. The upgrade database stands alone and includes all the DDL. When it runs, it runs the DDL against the selected BE. (A rough sketch of this pattern follows the list.)
4. This conversion has one append query. There is a local version of the data and a pre-linked table.
5. If you need to add multiple columns, you can add them in the same query. Look at q20.
6. No, the new fields end up at the end. The only way to change the order via DDL is to drop the table and recreate it, but that loses all the data.
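Not Pat's actual code - just an illustration of the two-database pattern from points 2-3; the querydef name, path, and the ALTER TABLE text are placeholders:
Code:
' Sketch: run DDL saved as a querydef in the upgrade db against a selected BE.
' THISDB = the code/upgrade database holding the querydefs; db = the BE to change.
Public Sub RunOneUpgradeQuery()
    Dim THISDB As DAO.Database
    Dim db As DAO.Database
    Dim strSQL As String

    Set THISDB = CurrentDb()
    Set db = DBEngine.OpenDatabase("C:\SomePath\Production_BE.accdb")  ' placeholder path

    ' e.g. the querydef might hold DDL such as:
    '   ALTER TABLE Table1 ADD COLUMN NewField1 TEXT(50), NewField2 LONG;
    strSQL = THISDB.QueryDefs("q01_AddColumns").SQL   ' copy the SQL from the code db
    db.Execute strSQL, dbFailOnError                  ' run it against the selected BE

    db.Close
End Sub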
 


@jdraw - great info. Thank you.

I've used the macro interface before; I used to have a complicated AutoExec macro built with it. It's usable, but it is more for people who don't want to be bothered learning VBA, and there are some VBA commands (I think) that aren't available through the interface. I ended up converting the federated macro to VBA and just calling the VBA from the interface, but I know how to work with the interface if that is the requirement - which it seems to be.

As far as copying - I'll experiment - probably create the macros in a test database, copy the macros to a copy of the production database, see how that works and then try it out on the live database. As I understand it, from the thread that you linked, there seem to be four options:
  • Ctrl-A and Ctrl-C in the old database and then Ctrl-V in the new database, as posted in reply 33 by zeroaccess. I think this will work (I think I remember doing it in the past when I rebuilt the database to a clean copy and transferred the federated macros). It's probably the simplest solution.
  • Export and Import from the Immediate Window using Mike Krailo's method in Reply 31, which was your original link.
  • It looks like @ebs17 in reply 38 has a method that would use VBA to do the reply 31 export and import in one step. That sounds promising. I'm a bit hesitant about it, but it would probably be the fastest way to get the data macros transferred (a hedged sketch of that idea follows this list).
  • Of course, there is the manual method of creating all the data macros in the test BE and manually re-entering them in the production BE (offline and exclusively), but that risks typos and would have the production BE offline for an extended time period.
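For the VBA route, here is the general idea as I understand it from that thread - treat it as a sketch to verify rather than a tested recipe, since it relies on an undocumented object type and I haven't confirmed how it behaves with linked tables. Paths and the table name are placeholders:
Code:
' Sketch only: export a table's data macros to XML, then import them elsewhere.
' 12 is the (undocumented) object type for table data macros that community
' posts pass to SaveAsText/LoadFromText.
Const DATA_MACRO_TYPE As Long = 12        ' corresponds to acTableDataMacro

Public Sub ExportDataMacros()
    ' run in the database that contains the table (the test BE)
    Application.SaveAsText DATA_MACRO_TYPE, "tblSource", "C:\Temp\tblSource_DM.xml"
End Sub

Public Sub ImportDataMacros()
    ' run in the target database (the production BE, opened for the purpose)
    Application.LoadFromText DATA_MACRO_TYPE, "tblSource", "C:\Temp\tblSource_DM.xml"
End Sub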
Based on your reply, I'll try to have each table's data macro write directly to the audit table rather than call a separate named macro. Basically, our data has 7 independent tables, so the plan (although there are other approaches) will be to create 7 audit tables and have one DM per table write to the associated audit table. A single macro per table gives me 7 macros to transfer as opposed to 14!

This is "air code" but the current plan is something like this:
  • The back end will have the data macros that write the events to a new tblAuditresults, which will probably contain a PK field, the source db PK field, an identifying field for the record that was changed in the source table, an event name, the old field value, the new field value and the username that created the change.
  • I'll make a query in the FE from the table and can add a field to that for the real person's name.
  • I'll have a button on the FE form that will open a pop-up form to display the query results for the currently selected record. I think I can use one of the events of the pop-up form to filter the results to the currently selected record - see the sketch after this list.
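Air code for that button too - the control, form, and field names below are all hypothetical, and passing a WhereCondition when opening the pop-up is just one simple way to do the filtering:
Code:
' Sketch: open the pop-up form filtered to the audit rows for the current record.
' cmdShowAudit, frmAuditPopup, SourcePK and PrimaryKey are placeholder names.
Private Sub cmdShowAudit_Click()
    If Not IsNull(Me![PrimaryKey]) Then
        DoCmd.OpenForm "frmAuditPopup", acNormal, , "[SourcePK] = " & Me![PrimaryKey]
    End If
End Sub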
I'll post progress reports. Unfortunately, this is something of a pie-in-the-sky project - meaning that there is a possibility that I will get it all working and management will decide it isn't acceptable and we need to continue with our current process. I and they are okay with that and I know I'll learn a lot either way.

@Pat Hartman - I'm not seeing an attachment in Reply #42 either. I'd be very interested in seeing this!
  • Agreed - deleting tables can be problematic. There are tables that won't be needed after the current database is updated, but if I delete them before everyone updates to the new FE, the old FE will generate errors.
  • I understood Reply item 6, but to clarify for others: you apparently can't adjust the new field order using DDL. You can add the new field using DDL, then manually open the back end, open the table in design view, and drag the field to the correct position/order without losing data. It's an extra step, but it's still faster and less error-prone than inserting the new field manually.
Thanks to all!
 
@jdraw (or others)

Out of curiosity, you use this function in the demo:
Code:
'---------------------------------------------------------------------------------------
' Procedure : fOSUserName
' Author    : Dev Ashish/MVPS.org
' Date      : 25-Jun-2017
' Purpose   : Returns the network login name of the user
' Obtained from http://access.mvps.org/access/api/api0008.htm
'---------------------------------------------------------------------------------------
'
' Module-level API declaration required by this function (PtrSafe on 64-bit Access):
#If VBA7 Then
Private Declare PtrSafe Function apiGetUserName Lib "advapi32.dll" Alias "GetUserNameA" _
    (ByVal lpBuffer As String, nSize As Long) As Long
#Else
Private Declare Function apiGetUserName Lib "advapi32.dll" Alias "GetUserNameA" _
    (ByVal lpBuffer As String, nSize As Long) As Long
#End If
Function fOSUserName() As String
' Returns the network login name
' Code Courtesy of
' Dev Ashish

    Dim lngLen As Long, lngX As Long
    Dim strUserName As String
10  On Error GoTo fOSUserName_Error

20  strUserName = String$(254, 0)
30  lngLen = 255
40  lngX = apiGetUserName(strUserName, lngLen)
50  If (lngX > 0) Then
60      fOSUserName = Left$(strUserName, lngLen - 1)
70  Else
80      fOSUserName = vbNullString
90  End If

fOSUserName_Exit:
100 Exit Function

fOSUserName_Error:
110 MsgBox "Error " & Err.Number & " in line " & Erl & " (" & Err.Description & ") in procedure fOSUserName of Module basModule1"
120 Resume fOSUserName_Exit
End Function

What (if anything) is the advantage of this over simply using Environ("Username")? (Or is the latter something that VBA recognizes, but the built-in macro code does not?)

I see the advantage of using a function for the result and our usernames seem to vary in case, so I would want to LCase() the result so the lookup table entries can all be LCase(), but am I missing something obvious?
 
Marshall,

I chose to use fOSUserName because I had seen it referenced as more reliable, and I followed Dev Ashish back in the old days. You can use whatever satisfies your requirement. Here's a link from Daniel Pineault re: getting the user name.

Access is case insensitive. So using LCase or UCase is aesthetic, but not required functionally.
 
@jdraw - Thank you! Dev Ashish and Daniel Pineault are great resources, and Daniel explains it well. Essentially, the environ variable can be spoofed. I wouldn't know how, and anyone who tried it would be in big trouble, but our company makes the usernames fairly accessible (to other employees).

It doesn't make sense to build all of this security/functionality into the database and then have someone potentially pretend they are me to get elevated rights. (Not saying they would, but it makes sense to prevent it.)

I also like Daniel's idea of calling it as a global function at startup - at least in theory. I can see errors if it fails - i.e. there is an unhandled error in the code and the public varUserName gets reset and now my code thinks the varUserName = Null.

Access is case insensitive. So using LCase or UCase is aesthetic, but not required functionally.
Access may be, but is Windows? Obviously, I'm obfuscating, but let's say my username is xy123456. On some systems I log in as xy123456 and on others I log in as XY123456.

Obviously:
If LCase(varUserName) = "xy123456" Then
will work fine, regardless of how I am logged in.

You are saying that
If varUserName = "xy123456" Then
will work even if I am logged in as XY123456?
 
In Windows, passwords are case-sensitive, while usernames are not.
 
I also like Daniel's idea of calling it as a global function at startup - at least in theory. I can see errors if it fails - i.e. there is an unhandled error in the code and the public varUserName gets reset and now my code thinks the varUserName = Null.
Is this a potential concern? I.e. an error occurs and the varUserName gets reset?

More plainly: Is it sufficient to have a public variable varUserName and populate that at startup, or would it be better to have a function that returns the username and then call the function any time the code needed to do something based on the username?
 
Obviously:
If LCase(varUserName) = "xy123456" Then
will work fine, regardless of how I am logged in.

You are saying that
If varUserName = "xy123456" Then
will work even if I am logged in as XY123456?
If you make the comparison in VBA, it depends on the setting in the module header.
Code:
Option Explicit
'Option Compare Database ' usually case insensitive
'Option Compare Text
Option Compare Binary ' <- Standard, if no compare option is specified

Private Sub Test()

    Dim x As String
    x = "ABC"
 
    If x = "abc" Then
        Debug.Print x & " = abc"
    Else
        Debug.Print x & " <> abc"
    End If

End Sub
With the StrComp function, you can use parameters to select the comparison method independently of the VBA compare option.
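For example (a minimal illustration):
Code:
' Case-insensitive comparison regardless of the module's Option Compare setting
If StrComp(varUserName, "xy123456", vbTextCompare) = 0 Then
    Debug.Print "Same user"
End If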
 
Thanks - I was using Option Compare Database, so it probably should be case-insensitive anyway.

Update - somewhat surprising, but it does work without the LCase().

I used the WScript.Network object and it came back uppercase, even though I thought I logged in in lower case.
I compared the result to lower-case and it came back True.
Then I made a small table with my user name in lower case in one field and my real name in the second field, and ran ELookup with my username as lowercase and again as uppercase; both returned my real name.
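For reference, the test looked roughly like this - a sketch only, with the table and field names as placeholders (the actual test used ELookup, but DLookup shows the same point):
Code:
Dim strUser As String
strUser = CreateObject("WScript.Network").UserName    ' came back as "XY123456" (uppercase)
' With Option Compare Database at the top of the module this prints True:
Debug.Print (strUser = "xy123456")
' And the lookup finds the row whether the stored value is upper- or lower-case:
Debug.Print DLookup("RealName", "tblUsers", "UserName=""" & strUser & """")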
 
Is this a potential concern? I.e. an error occurs and the varUserName gets reset?
Did some research and I'm answering my own question... (although the correct answer is "possibly...").


Unhandled errors in and of themselves don't cause variables to be reset. In .accde files, variables are not reset by errors. In .accdb files, variables are reset by clicking End when the error message occurs.

I would structure my code so that a reset fails closed (non-permissive). (For example, I only allow certain users to add or delete records. I would set the code so that the Add/Delete button was only visible if the username matched. If the variable was reset and varUserName came back empty/Null, the Add/Delete button would stay hidden.)
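A minimal sketch of that fail-closed check - the control and table names are hypothetical, and varUserName is assumed to be a public Variant populated at startup:
Code:
Private Sub Form_Load()
    ' varUserName is the public variable populated at startup (Public varUserName As Variant);
    ' tblAllowedUsers and cmdAddDelete are placeholder names used only for illustration.
    Dim strUser As String
    strUser = Nz(varUserName, "")
    If Len(strUser) > 0 And _
       Not IsNull(DLookup("UserName", "tblAllowedUsers", _
                          "UserName=""" & strUser & """")) Then
        Me.cmdAddDelete.Visible = True
    Else
        Me.cmdAddDelete.Visible = False      ' fail closed when the name is unknown
    End If
End Sub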

So for Deployment, it is probably safe to use a global variable.

For Development, there is a fairly high chance of me making a typo and causing an unhandled error that I click End on, and then I'm locked out of a lot of functions unless I close and re-open the database, so it's probably better not to use them in this case.
 
I would use a property procedure or function instead of the global variable, and possibly a private module-level variable or a Static variable if the user name does not need to be determined each time.

Concept:
Code:
Public Function CurrentWindowsUser() As String

    ' Cache the name in a Static variable so it is only determined once per session.
    Static WinUserName As String

    If Len(WinUserName) = 0 Then
        WinUserName = GetWindowsUserName()
    End If

    CurrentWindowsUser = WinUserName

End Function

Private Function GetWindowsUserName() As String
    ' ... get the user name with an API call etc.,
    ' e.g. the fOSUserName function posted earlier in this thread (any equivalent works).
    GetWindowsUserName = fOSUserName()
End Function
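Usage is then just a function call wherever the name is needed. The Static variable means the lookup happens once per session, and if a code reset clears it, the next call simply repopulates it:
Code:
Debug.Print CurrentWindowsUser()   ' first call fetches the name, later calls reuse it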
 
@Josef P. - That seems to have worked great!!!

@Pat Hartman - Could you please re-post the sample database from Reply #42. I don't see it as an attachment and I searched for forum for something similar and didn't see anything, but I might not have used the correct search terms. Thank you!
 
@Pat Hartman - Thank you!!! I was able to modify your database to add a new table to my test copy of the BE. Worked flawlessly. It was VERY helpful to see HOW to implement the DDL changes. I have a lot more to do, but at least I understand the process now.

@jdraw - I have a somewhat simplistic question on data macros. In the audit table, you include the old value and the new value, which seems like a good idea and information I would want to know. So far, I only need to audit changes to date/time fields, so I made those fields date/time in the new audit table. If I also wanted to audit a text field, wouldn't I get a "Type Mismatch" doing it this way? How would I work around that, other than having a date audit table and a text audit table and somehow combining the results? As I said, it isn't an issue currently, but I could see it potentially becoming one.

All - Thanks again!!!
 
You're welcome. Once you see how it is done and understand the use of the two different database references (the code db and the to be changed db), it becomes pretty easy;)

The log table needs:
a. an autonumber PK
b. a long integer to hold the PK of the record this data belongs to
c. the column name of the data being logged.
d. the datetime of the logging.
e. ChangedByUser

There are two typical solutions.
1. Use a short text field (255 char) to hold all types of data except long text. Then keep a type field so you can work with the text field properly. Keep a separate field or preferably a separate table for long text fields.
2. Add one column of each data type.

I think #2 is harder to work with.

Also, I think logging the previous value and the new value in the same record makes reporting a whole lot easier. But, if you do it, use the same technique 1 or 2 for the previous value as you use for the new value.

In any event, when I do logging, if I only log old or new value, I always log the "new" value. That means that the logging is very simple and it always goes into the Form's AfterUpdate event. If you also keep before values, you need to save the before value into a variable in the Form's BeforeUpdate event so it is available in the AfterUpdate event.
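A minimal sketch of that form-event approach (this is the form-level alternative, not the data-macro route; the table and field names follow the list above but are placeholders, not code from an actual app):
Code:
' In the form's module. Capture the before value in BeforeUpdate, write the
' log row in AfterUpdate. tblChangeLog and its fields are placeholder names;
' SomeField is the bound control for the field being audited.
Private mOldValue As Variant

Private Sub Form_BeforeUpdate(Cancel As Integer)
    mOldValue = Me![SomeField].OldValue
End Sub

Private Sub Form_AfterUpdate()
    Dim rs As DAO.Recordset
    Set rs = CurrentDb.OpenRecordset("tblChangeLog", dbOpenDynaset, dbAppendOnly)
    rs.AddNew
    rs!RecordPK = Me![ID]                ' PK of the record this data belongs to
    rs!ColumnName = "SomeField"
    rs!OldValue = mOldValue & ""         ' stored as text (technique #1 above)
    rs!NewValue = Me![SomeField] & ""
    rs!ChangeDate = Now()
    rs!ChangedByUser = fOSUserName()     ' or whichever username function you use
    rs.Update
    rs.Close
End Sub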
 
Long post ahead ...

@Pat Hartman - As I'll show, right now I'm using Date/Time fields in the tblAuditLog to store date/time fields. You are saying it would work fine to store these values in text fields. (It's an empty table now, I can change/rebuild it.)

@jdraw - I'm really struggling with the data macros:

First off - I'm not sure how the parameters work. The example shows a parameter name and a description, but if I add a parameter, it just asks me for name and description. Can it really just logically tell what the item should be from this? I wrote the data macro without parameters and maybe that is my issue.

Next, I'm slightly obfuscating the names, but ...

I created a new table which I'll call tblAuditlog with the following fields:
CREATE TABLE [tblAuditLog]
(
[PK] COUNTER Primary Key,
[Orig_PK] Long,
[REFERENCE] text(50),
[EVENT] text(255),
[OLD_VALUE] DateTime,
[NEW_VALUE] DateTime,
[USERNAME] text(20),
[MOD_DATE] DateTime
)

My source table I will just call tblSource and I want to create a record when field "Event 1 Complete" is changed.

I didn't use parameters and I created the data macro and changed "Event 1 Complete" for one of the records and nothing happened. No error messages and no new records in tblAuditLog.

I think what I am missing is that you said it was supposed to be triggered by (listening for) Insert, Update, or Delete, but I don't see how I specify that.

Comments shown after ' are for explanation and not part of the interface.

My data macro (in tblSource) looks more or less like this:
If Updated("Event 1 Complete") Then
Create a record in tblAuditLog
SetField
Name tblAuditLog.Orig_PK ' Should this be just Orig_PK?
Value=[tblSource].[PrimaryKey] 'That is the field name in the source table.
SetField
Name REFERENCE
Value=[tblSource].[Reference] ' I think this should be all caps, but it didn't error.
SetField
Name EVENT
Value="Event 1 Completed" ' I want this to be text
SetField
Name OLD_VALUE
Value=[tblSource].[Event 1 Complete].[oldvalue]
SetField
Name NEW_VALUE
Value=[tblSource].[Event 1 Complete]
SetField
Name USERNAME
Value [GetUserName] ' I copied the function posted earlier to the BE module 1.
SetField
Name MOD_DATE
Value=Now()
End If

Thanks for your patience and assistance with this!
 
I think I found my error - on your info tab you say "I have used After Insert, After Update, and After Delete. Notice they are highlighted." I didn't notice, but I see it now.

Let me try again!

Okay - I don't see how to make my existing named macro work for all three events. I created it with CreateNamedMacro. If I go to Edit, I can't select Events. If I don't go to edit it, I can select AfterInsert, but it gives me a new blank macro named AfterInsert.

???

Figured it out, sorry - in the blank macro, I have to tell it to run my newly created named macro. Still may have errors, but I'm making progress!!!

Still not there yet - it is supposed to run my named macro from each event, but I changed the field in the table and nothing was written to the audit table.
 
You are saying it would work fine to store these values in text fields.
That's how SAP works. It isn't a good idea for a transaction database (which is mostly what we create) because it is a pain to work with, but for "dead" data, it's fine. You can store the date as a string - 1/4/2024 6:03:27 PM - or you can store it as its double-precision value - 45295.752337963.
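A small helper along those lines (my own sketch, not from the demo), showing both ways of turning a Date/Time into text for a Short Text audit column:
Code:
Public Function DateToLogText(ByVal dtValue As Date, _
                              Optional ByVal AsDouble As Boolean = False) As String
    If AsDouble Then
        DateToLogText = CStr(CDbl(dtValue))                       ' e.g. 45295.752337963
    Else
        DateToLogText = Format$(dtValue, "yyyy-mm-dd hh:nn:ss")   ' readable, sorts as text
    End If
End Function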
 
Marshall,
You said:
"I'm really struggling with the data macros:"

Yes, I know the feeling. That's why I struggled and built my demo database back in 2017.
You may find some info on YouTube - I know Dr. Richard Hanson created a video on data macros.
It may help with context.

Have you gone through my demo database to see how things work, using table data macros and a named data macro to write data to the tblAuditLog?

An audit log/file is often used to identify what changes were made to a database, who made the change, and when the change occurred. In other databases, audit logs and/or transaction files can also be used to reconstruct business transactions.

Attached in a zip file are the data macros associated with the tStudent table and tblAuditLog in the demo database. Again, it might provide some context. You can open the files in a browser, or you can use Notepad++ with the XML plugin's Pretty Print to see the XML.

Good luck with your project.
 

