How to test a program before releasing a new version?

amorosik

When a new procedure is created, or even when functionality is simply added to an existing one, the moment of release to the end user is always a critical phase.
Have I thoroughly tested the new features?
Have I introduced anomalies in functions that worked correctly before?
And so on.
So the question is: how do you effectively test a new procedure, or one already released and now updated with new features?
 
I create a test script to verify that the modification does what it is supposed to do - nothing more and nothing less.

Put the updated app in a test environment and get one or more users from the production environment to test it. If you have multilevel users (admin, supervisor, data entry, etc.), get at least one from each level.

Any issues found are fixed and retested.

Once all tests are passed, the app can be released to production.

It often surprises me what users do - some always use a mouse, some only the keyboard. Some exploit features such as right-click menus, others don't, etc.
 

Test classes / procedures

Basic requirement: the code is designed to be test-friendly.
If this is fulfilled, unit tests can cover quite a lot.

I use a test framework (AccUnit) I created myself that allows creating tests similar to NUnit. (I write mostly row tests.)
Example process: https://accunit.access-codelib.net/videos/examples/NW2-UnitTests.mp4
Rubberduck also contains a unit test framework.
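
For readers who have not seen either framework: a Rubberduck-style test module looks roughly like the sketch below. The annotations (`'@TestModule`, `'@TestMethod`) and `AssertClass` are Rubberduck's; the function under test, `NormalizeCustomerName`, is a made-up example.

```vba
'@TestModule
Option Explicit
Private Assert As Rubberduck.AssertClass

'@ModuleInitialize
Private Sub ModuleInitialize()
    Set Assert = New Rubberduck.AssertClass
End Sub

'@TestMethod("Validation")
Private Sub TrimmedNameIsNeverEmpty()
    'Arrange/Act: call the routine under test with an edge-case input
    Dim result As String
    result = NormalizeCustomerName("   ")   'hypothetical function under test
    'Assert: the contract we want to protect against regressions
    Assert.AreEqual "(unknown)", result
End Sub
```

Each such test pins down one piece of the contract, so a later change that breaks it fails the test run instead of surprising a user.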

Note: Along with AccUnit (and its fork), these are all the test frameworks I know of that you can use in Access/VBA.
If anyone knows of or uses any others, I would be interested.

Test user interface

I usually have the user interface tested by the "main users" who commissioned the customization from me.
I test it myself beforehand, of course, but that's too little. Users usually do it differently than you expect. ;)
 
Ok: tested HOW ?
This question is too broad for me to give a concrete answer. ;)

What specific question do you have after reading Philipp's article (EN version), slides and watching the video?
 
Ok: tested HOW ?

This question is entirely too difficult for any of us to answer in detail. Not too difficult technically, but we can't get detailed in testing because we don't know the ins and outs of your design. You are the subject matter expert. You are the designer of your solution to the problem. Just reading your code won't tell us your intent, not that it matters, because we ALSO wouldn't know if your code actually aligned with your intent. All any of us will be able to give you will be broad comments and general guidelines.

I can tell you how I did testing for my big Navy project. First, I had a dedicated folder on the machine that was my in-house server. The folder was dedicated but the machine was shared with two other in-house projects. In that folder I created four sub-folders. They were DEV (for development), TEST (for testing), STAGING (for final-stage preparations), and PROD (for production). The users mapped to the PROD folder and never saw the other folders. Each folder included its own copy of the front-end file and a copy of the back-end file. (Except STAGING, which usually just used the production BE file.)

Whatever I did, I did to DEV first. Code it up, muck the displays, whatever... but it ONLY happened to DEV. Then when I was ready to actually do formal testing, I copied the FE from DEV to TEST, relinked to the TEST BE, and started testing. I developed a list of things that I always tested and then had the second list of things related only to the latest change. I.e. the list included the "always" tests and the "recent change" tests EVERY TIME. The list of recent work included testing to determine whether working on X broke something on Y that was a related form or sequence of queries. If something WAS broken, it was NEVER fixed in TEST. It was fixed in DEV and the testing process started over again.
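
The "copy the FE and relink to the TEST BE" step can be scripted so it is exactly repeatable. A minimal DAO sketch (the procedure name and folder path are illustrative, not from the original workflow):

```vba
' Relink every linked Access table in the front end to a different
' back-end file. ODBC links are skipped; path is an example only.
Public Sub RelinkTo(ByVal backEndPath As String)
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef
    Set db = CurrentDb
    For Each tdf In db.TableDefs
        If Len(tdf.Connect) > 0 And Left$(tdf.Connect, 5) <> "ODBC;" Then
            tdf.Connect = ";DATABASE=" & backEndPath
            tdf.RefreshLink
        End If
    Next tdf
End Sub

' Usage: RelinkTo "\\server\share\App\TEST\App_be.accdb"
```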

Once the tests were complete and I had no more changes to make, I moved the copy from TEST (that was an accurate copy from DEV, because remember I didn't fix TEST, I fixed DEV). The FE went to STAGING. Here is where a split occurred in the procedures.

Most of my changes were either fixes for errors in the code or a query, or they were new features that were code-only or a new form and some code. The BE file was not usually involved in any changes. So in STAGING, I would relink the new version of the FE to the production BE. I would also update the FE's version information so that the startup code would detect whether users would be required to update. (I wasn't able, by Navy regulation, to use the auto-updater - much to my disappointment.) I would launch the STAGING copy to verify that it identified itself correctly and didn't crash. Finally, when the STAGING copy of the FE was good to go, I copied the current PROD FE to a backup folder and then moved the now-relinked FE from STAGING to PROD. Note that since the BE wasn't affected and the only changes were fixes and new features, the BE could stay "up" while all of this was going on behind the scenes. I had an "announcement" feature that would notify users of the new version and I could tell them if the "fixes" were important enough to do an update of the FE. Otherwise, failure to update just meant they didn't get the new features right away.
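
A startup version check of the kind described above can be sketched as follows, assuming the BE carries a one-row version table (all table, field, and constant names here are hypothetical, not the Navy app's):

```vba
' Compare the FE's compiled-in version with the minimum version
' recorded in the back end. Bump FE_VERSION on every release.
Private Const FE_VERSION As Long = 42

Public Function UpdateRequired() As Boolean
    Dim minVersion As Long
    'tblVersion / MinFEVersion are hypothetical names for the BE version table
    minVersion = Nz(DLookup("MinFEVersion", "tblVersion"), 0)
    UpdateRequired = (FE_VERSION < minVersion)
End Function

' In the startup form:
' If UpdateRequired() Then MsgBox "Please copy the new front end.": Application.Quit
```

Whether the check merely announces the new version or blocks the user entirely is then a one-line policy decision in the startup code.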

When the BE had to be updated, it got trickier because I had to have a list of changes to be made based on the PROD BE file. So when I knew that I was going to update the BE, I copied the PROD BE to become the new DEV BE as my starting point. This time the changes had to be made to both the FE and BE, so both of them got moved to TEST. But this time, I had to make a detailed list of changes to be made to the PROD BE file. Some of the changes could be made with DDL; others required DDL followed by an UPDATE query. This is when I needed to take the whole DB down.

Since the BE was linked by name, all I had to do was set up my "DB GOING DOWN" feature that would block people from doing anything during scheduled down time. When it was safe (i.e. no one in the DB), I renamed the BE to a place where I could make the changes necessary for the update. In this "down" period, since the BE was not where it was supposed to be, nobody could use it anyway.
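
One common way to implement such a "DB GOING DOWN" block (a sketch, not necessarily how that app did it) is a flag table polled from a hidden form's Timer event:

```vba
' In a hidden form with TimerInterval = 60000 (checks once a minute).
' tblAdmin / ShutdownPending are hypothetical names.
Private Sub Form_Timer()
    If Nz(DLookup("ShutdownPending", "tblAdmin"), False) Then
        MsgBox "The database is going down for maintenance. Please finish up.", _
               vbExclamation
        DoCmd.Quit acQuitSaveAll   'or close all open forms first, then quit
    End If
End Sub
```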

The FE changes were done as described earlier. The BE changes were made in that alternate folder. (You guessed it... STAGING.) When I moved the updated FE and updated BE to PROD, I set up the internal startup checks to recognize that the users were FORCED to do the update by copying the FE file. And the FE file knew where it should NOT be when it was running, so no one was ever able to run the FE directly from PROD. They HAD to make a copy.

That was the logistics of testing. Now, the other part of this: So you have this fancy setup for testing. But WHAT/HOW do you test? The answer is that during your design phase, you made lists of what your DB had to do. You worked towards some performance or operational goal. The tests HAVE to be that your code continues to provide whatever was specified as the design goal. This is where a "design document" becomes INVALUABLE. If you implemented to a given design goal, your test is that you still achieve that goal with your new version. AND - if you are meticulous about it - you have modified the design document to include any new goals.

Now, the sad reality comes in. Entirely too many people don't have the self-discipline (or work-mandated discipline) to do anything but "shoot from the hip" - a USA idiom from our "wild West" days. In modern terms, it is when we do not work in an organized and careful manner. We just "take our best shot" at a target that we might not have defined very well. This comes back to haunt you the MOMENT you have to do any testing. Because if you weren't organized about what you updated, you won't be able to organize or design well-directed testing. Entirely too many people fail to define their goals ahead of time and then find that when it is time to do testing, they are operating in a vacuum.

Which brings us back to the question: How do we do testing? By remembering at the start of the project that you will still have testing to do at the end of the project, and thus making plans not only for the development, but ALSO for the testing of your project creation or update. Good testing starts with good design. Impossible testing is derived from having no design.
 
I understand what you mean, and thank you for sharing your method
But to think that a single person can test all the foreseen functions and verify that every possible path behaves correctly is conceivable only for projects of minimal size - by minimal size I mean databases with under 100 tables, fewer than 100 forms, and a few modules with a few functions.
For more complex projects, I think it would be utopian to verify that the foreseen functions behave as requested, because the tests actually performed will certainly not be exhaustive.
So what I was asking is: assuming we start with a new project, which tools/methods should we adopt to maximize the chances of testing the code, possibly automatically, before releasing it to the operators who will use it in production?
The ultimate goal is to highlight possible errors in the code, allow them to be corrected, and ideally eliminate regressions.
 
So what I was asking is, assuming starting with a new project
that is not what you asked.

You asked
how do you effectively test a new procedure or a procedure already released and updated on new features?

Think you need to clarify exactly what you mean. And to be clear, since you tend to be rather vague about what you are actually using in the context of your questions: are you talking about an Access BE, SQL Server, or something else? And is it safe to assume you are talking about Access/VBA, or some other platform/language for the front end?
 
I also use a similar folder setup as Doc. Changes always originate in Dev and move forward.

How you test differs between interactive processes like forms and batch processes such as printing monthly invoices. Very few Access applications that I have developed actually have batch processes. Pretty much everything is at least initiated by the user doing something on a form. Back in my COBOL days we actually had a test harness for our CICS transactions (similar in concept to forms). We could prepare sets of inputs. Then we would run a batch job which would, using the "harness", enter each individual set of data and press whatever button the test indicated. The form would run and the output would be printed instead of displayed. I have never seen anything like this for Access. I have seen "harness" type code that would run batch processes.

For forms, at a minimum, you have to check your validation logic for each control by entering bad data in the control and determining what message you get, or whether your form saves the bad data.
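
The per-control validation being exercised here usually lives in the control's BeforeUpdate event; a typical pattern (control name and message are illustrative) looks like this, and the manual test is precisely to type bad data and confirm the message appears and the save is blocked:

```vba
' Reject bad data before it reaches the table. Setting Cancel = True
' keeps focus in the control and prevents the update.
Private Sub txtQuantity_BeforeUpdate(Cancel As Integer)
    If Nz(Me.txtQuantity, 0) <= 0 Then
        MsgBox "Quantity must be a positive number.", vbExclamation
        Cancel = True
    End If
End Sub
```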

For batch processes, you need to create a variety of records in various tables. You should also create the output file you expect. Then you run the batch process and compare the resulting output with the file you created manually. You also need to do testing at scale where you add a lot of test data to every table so you can ensure that your forms, etc don't slow to a crawl once you get to a half million records. You can create the data yourself or you can use one of the products that will create logical sets for you. If I were doing more development work, I would buy one of those products. They do an excellent job of creating coherent data where you have items on orders and orders for customers etc.

Then you also need to release the QA version of the app to a select group of users for systems testing. Copy the production BE to a testing folder and link the test FE to the test BE and distribute test FE's to the test user group. I have the switchboard change the background to a garish color whenever the BE is not in the production folder on the server. This is intended to prevent the user from thinking he is working in the production database and hopefully prevent him from losing data because he entered it in the wrong application.
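
The "garish color" trick can be as simple as inspecting a linked table's Connect string when the switchboard loads. A sketch, with made-up table and folder names:

```vba
' Turn the switchboard red whenever the FE is linked to anything other
' than the production back end. Table name and path are illustrative.
Private Sub Form_Load()
    Dim connectStr As String
    connectStr = CurrentDb.TableDefs("tblCustomers").Connect
    If InStr(1, connectStr, "\\server\share\App\PROD\", vbTextCompare) = 0 Then
        Me.Section(acDetail).BackColor = vbRed   'unmistakably NOT production
        Me.Caption = "*** TEST ENVIRONMENT ***"
    End If
End Sub
```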

Give the users an outline of the changes you made and the objects that they specifically should concentrate on although they should still do sanity checks on other objects.
 
that is not what you asked. [...] are you talking about an Access BE, SQL Server, or something else?

The need to test a product before releasing it to the end user is obviously real, both for a new product and following an update that adds new features
After reading The_Doc_Man's answer "..but we can't get detailed in testing because we don't know the ins and outs of your design.." I thought about discussing a "new project" to simplify things and eliminate any obstacle due to existing code
I'm talking about the front end of an application created with Access
 
I'm talking about the front-end of an application created with Access
I addressed that in my comments. When you are testing code, you ALWAYS need to know what value the code should return.
 
I'm talking about the front-end of an application created with Access

I was, too. We have an old saying about projects: The devil is in the details. You've got an Access application and you are talking about the FE? Great. But what did you expect us to do about it? We have no idea of the applicable business rules, regulatory environment, and potential thirst for vengeance from disgruntled users.

So what I was asking is, assuming starting with a new project, which tools/methods to adopt to maximize the possibility of testing the code, possibly automatically, before releasing it to operators who will use it in production

I have seen a few large-scale software testing tools that could be used for major code projects. But most of them don't target something as (relatively) small as an Access FE. The Navy actually had a few testing products that they could use to automate testing. But guess what? The developers still had to design the tests. The automated products usually do no more than run a script that supplies specific inputs and looks for specific outputs - tests which only the developers can design, because only they know what was originally intended. The ONLY value that most testing products add is that they take good statistics over repeated runs and make pretty reports.

I have also seen some products that can look at code and analyze program flow patterns by doing a topological analysis. But that STILL doesn't help you to know whether the code you wrote actually addressed the problem you have. You can apply artificial intelligence in the testing tool, but there is a rule about that: Artificial intelligence cannot cope with natural stupidity. If you design a boat like a solid lead ingot, it will still sink when you launch it no matter how many bells and whistles you add.

The problem here is that the logic of YOUR application is YOUR baby and YOU have to rock it to sleep. So I revert to my original comment. If you don't build a design document with stated behavioral and result goals, you will be FOREVER plagued with difficulties in testing and debugging. There is NO SUBSTITUTE for careful prior planning that includes test guidelines.
 
I addressed that in my comments. When you are testing code, you ALWAYS need to know what value the code should return.

Sure, but manually testing ALL the forms/reports/code is extremely time-consuming, and in some cases not really possible
That's why I'm looking for a system to help with the test phase
 
Forms and reports are very difficult to test automatically.
However, you can design the code in such a way that the "critical" code is independently testable.

For example, I use the SqlTools class to generate SQL text and filter expressions. For the methods of this class I created unit tests. With these I make sure that the requirements are fulfilled and remain so after later adaptations. So I secure the interface and the behavior of the class with tests.
If I then use this class in other procedures, I can rely on it behaving as the tests specify.
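
The key to making "critical" code independently testable is keeping it pure: no UI, no database access. A hypothetical helper in the spirit of SqlTools (not the actual class):

```vba
' A pure function: same input always gives the same output, so it can
' be covered by unit tests without opening a single form.
Public Function BuildLikeFilter(ByVal fieldName As String, _
                                ByVal pattern As String) As String
    'Escape embedded quotes so user input cannot break the expression
    pattern = Replace(pattern, "'", "''")
    BuildLikeFilter = "[" & fieldName & "] LIKE '*" & pattern & "*'"
End Function

' A unit test then pins the contract, e.g.:
'   Assert.AreEqual "[City] LIKE '*Rome*'", BuildLikeFilter("City", "Rome")
```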

If I want to add a new feature to SqlTools, I first write the test that describes the requirement and then adapt the code of SqlTools (TDD).
However, I must confess that I do not consistently apply TDD in application development. It would be good, but in practice the implementation is not always ideal. ;)

Note: writing the tests requires just as much attention/concentration as writing the actual code.

SqlTools class:
Test classes for SqlTools:
 
Thank you, Josef P., for corroborating my comments regarding testing:

Note: writing the tests requires just as much attention/concentration as writing the actual code.

Sure, but manually testing ALL the forms/reports/code is extremely time-consuming, and in some cases not really possible

It was possible for you to write all of the code and build all of the forms originally. Was that not really possible? I'm not trying to bust your chops here, I'm just pointing out that some things are unavoidable - like designing your testing when you design your app. But there IS one other way to deal with this. We should remember Ed Murphy, he of Murphy's Law, who said: "Anything that CAN go wrong WILL go wrong." But folks forget what else he said afterwards: "So design it so that it cannot go wrong." If you build in error traps and self-testing code, you will do far better for yourself because your error traps can log places for you to examine for errors.
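
A minimal version of the "error traps that log places for you to examine" idea, with hypothetical table and field names:

```vba
' Standard error-handler pattern that records failures in a log table
' instead of silently swallowing them.
Public Sub DoRiskyWork()
    On Error GoTo ErrHandler
    '... the real work goes here ...
    Exit Sub
ErrHandler:
    CurrentDb.Execute _
        "INSERT INTO tblErrorLog (ErrNumber, ErrDescription, ProcName, LoggedAt) " & _
        "VALUES (" & Err.Number & ", '" & Replace(Err.Description, "'", "''") & _
        "', 'DoRiskyWork', Now())", dbFailOnError
    Resume Next   'or jump to a cleanup label, depending on severity
End Sub
```

After a release, the log table tells you exactly which procedures are failing in production and how often, which focuses the next round of testing.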

There exist many design guidelines for building large apps. One that is ALWAYS valid and ALWAYS good advice is "Divide and conquer." Design your app so that you can subdivide it into discrete phases or functions or subject or whatever, so that you can reduce the scope of attention required to address an issue. You ALWAYS want to address smaller issues because they are ALWAYS easier to debug.

I sympathize with your problem, but it is a general rule that bigger projects need more detailed testing. You cannot avoid that simple fact of life.
 
I sympathize with your problem, but it is a general rule that bigger projects need more detailed testing. You cannot avoid that simple fact of life.

Of course it is necessary to test a project adequately, especially if it is of large dimensions
What I'm trying to understand is whether it's possible to automate a part of this process
 
What I'm trying to understand is whether it's possible to automate a part of this process

Try this: Do a web search of the exact phrase following this sentence and then do some follow-up exploration: "Automated software test tools"

You will get dozens of different product web-hits. I make no claims regarding any of them including whether any of them can work with MS Access. SOME of the articles claim the ability to test GUI applications. As a further disclaimer: To the best of my knowledge I have no financial interest in ANY of the products that come up during that search.

The part that you can automate is repetitive testing and result gathering. You will STILL need to come up with test definitions because just like Access knows nothing about your business, neither will the testing software products.
 
"Automated software test tools"

[...] I make no claims regarding any of them including whether any of them can work with MS Access.
Most can't.
Generic testing tools are primarily UI-automation tools. Most of those struggle with Access because Access UI controls, unlike those of most Windows applications, are not actual windows. They are just drawn onto the screen. This makes it very difficult to automate an Access application's UI from the outside.

I used to use Visual Studio Coded-UI tests (now deprecated/removed) in the .Net/WinForms context. It was quite tedious to create those Coded-UI tests and they were very fragile. Small changes to the UI or slight lags in the response time of the application would break a test either temporarily or permanently.

I think it is much more worth your while to put effort into code-level tests such as unit tests or integration tests.
 