This speed test was inspired by an example by UA member cheekybuddha in this post
I've taken the same idea a bit further with another test in my ever-growing series of speed comparison tests on my website.
See item 7 - Check Record Exists
As in the UA example, the tests compare the time required to check the existence of a specified record in a large data table using 4 different methods: DLookup / DCount / SQL Count(*) / SQL SELECT
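For reference, the four checks can be sketched in VBA roughly as below. This is my own illustrative code, not the code from the test databases: the table and field names (tblData, Postcode) match the description later in this post, but the exact SQL and variable names are my assumptions.

```vba
' Illustrative sketch of the four existence checks.
' Assumes a table tblData with a text field Postcode.
Dim strPC As String
Dim blnFound As Boolean
Dim rst As DAO.Recordset

strPC = "XM4 5HQ"

' 1. DLookup - returns Null when no record matches
blnFound = Not IsNull(DLookup("Postcode", "tblData", "Postcode='" & strPC & "'"))

' 2. DCount - counts the matching records
blnFound = (DCount("*", "tblData", "Postcode='" & strPC & "'") > 0)

' 3. SQL Count(*) - aggregate query opened as a recordset
Set rst = CurrentDb.OpenRecordset( _
    "SELECT Count(*) AS N FROM tblData WHERE Postcode='" & strPC & "'", dbOpenSnapshot)
blnFound = (rst!N > 0)
rst.Close

' 4. SQL SELECT - open a recordset and test whether it is empty
Set rst = CurrentDb.OpenRecordset( _
    "SELECT Postcode FROM tblData WHERE Postcode='" & strPC & "'", dbOpenSnapshot)
blnFound = Not (rst.BOF And rst.EOF)
rst.Close
Set rst = Nothing
```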
There are two similar versions of this set of speed tests.
The main difference is the search field is NOT INDEXED in one and INDEXED in the other.
Of course, in a real world application, fields being searched regularly should normally be INDEXED.
The index increases file size but dramatically reduces search times.
In this example, the search time was over 250x faster using indexing but there are some other interesting differences in the results for each version.
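For anyone recreating the tests, the index can be added in table design view or with a one-line DDL statement; a minimal sketch (the index name idxPostcode is just illustrative):

```vba
' Create a non-unique index on the search field.
' Index name idxPostcode is illustrative - any name will do.
CurrentDb.Execute "CREATE INDEX idxPostcode ON tblData (Postcode)", dbFailOnError
```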
In each case, a 'reference' table tblSource containing 10,000 different UK postcodes (deliberately kept small to reduce file size) is used to populate the test table tblData which is initially empty.
In order to get a large data table, those records are appended repeatedly.
For example, 100 batches (the default) of 10,000 records gives a total of 1 million records in tblData.
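The batch append can be sketched as a simple loop running an append query; again this is my own illustrative code, assuming both tables have a Postcode field:

```vba
' Sketch: append the 10,000-record source table repeatedly to
' build a large test table. lngBatches = 100 gives 1 million rows.
Dim lngBatches As Long, i As Long
lngBatches = 100
For i = 1 To lngBatches
    CurrentDb.Execute _
        "INSERT INTO tblData (Postcode) SELECT Postcode FROM tblSource", dbFailOnError
Next i
```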
One RANDOM record is then replaced by a 'dummy' postcode 'XM4 5HQ' used in the record check.
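One way of doing that replacement is an update query keyed on a randomly chosen record number; the autonumber ID field below is my assumption about the table design:

```vba
' Sketch: overwrite one randomly chosen record with the dummy postcode.
' Assumes tblData has a sequential autonumber field ID (illustrative)
' and currently holds 1,000,000 records.
Dim lngID As Long
Randomize
lngID = Int(Rnd() * 1000000) + 1    ' random value 1 to 1,000,000
CurrentDb.Execute _
    "UPDATE tblData SET Postcode='XM4 5HQ' WHERE ID=" & lngID, dbFailOnError
```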
For info, this postcode is used to sort letters addressed to Santa Claus!!!
Each speed test was run repeatedly by looping through the code multiple times and measuring the total time.
The number of loops can be varied (default = 100).
So, in this case, 1 million records are looped through 100 times, i.e. 100 million records checked.
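The timing harness can be sketched like this, using the built-in Timer function (seconds since midnight) for simplicity; the actual tests may well use a higher-resolution timer such as the GetTickCount API:

```vba
' Sketch of the timing loop: run one check method repeatedly
' and report the total elapsed time.
Dim lngLoops As Long, i As Long
Dim sngStart As Single
Dim blnFound As Boolean

lngLoops = 100
sngStart = Timer
For i = 1 To lngLoops
    blnFound = (DCount("*", "tblData", "Postcode='XM4 5HQ'") > 0)
Next i
Debug.Print "Total time: " & Format(Timer - sngStart, "0.00") & " s"
```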
In the non-indexed tests, DLookup & SQL SELECT were similar to each other and faster than DCount or SQL Count(*) - often MUCH faster.
The difference depended on the position in the table of the record being searched (as you would expect).
For the indexed field, all 4 results were similar, though SQL SELECT was always the slowest.
DLookup & DCount were, perhaps surprisingly, almost always the fastest (though not by much)
The real power of indexing was confirmed by increasing the test table to 10 million records.
The times required were identical to those for 1 million records.
In fact, I also tested with 30 million records - once again indexing meant the total times were the same.
However, it took several hours to create that ridiculously large test table and the database grew to 1.9 GB - so NOT recommended!
See attached for the 2 example databases and a PDF version of the website article
I would appreciate feedback on any of the website articles - some produce some surprising results e.g. HAVING vs WHERE
If you wish to respond, the easiest way is probably to email me using the link in my signature line