Doc / CJ
You both made some very valid points.
The postcodes table does contain several indexes in order to speed up searching such a massive table.
Normally I only need to repopulate the table every 3 months, when new data is released by the UK Office for National Statistics. I have tried removing the indexes before repopulating and it is indeed faster, but as it's not needed that often, I just leave it running whilst I do something else.
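In case it helps anyone doing the same, this is roughly how dropping and recreating an index around a bulk reload can be done from VBA. The table, index and field names below are just placeholders, not my actual schema:

```vba
' Sketch only: drop an index before the bulk reload, recreate it afterwards.
' tblPostcodes / tblPostcodesNew / idxPostcode are placeholder names.
Sub ReloadPostcodes()
    Dim db As DAO.Database
    Set db = CurrentDb

    ' Dropping the index first speeds up the bulk insert
    db.Execute "DROP INDEX idxPostcode ON tblPostcodes", dbFailOnError

    ' Empty the table, then repopulate from the new data
    db.Execute "DELETE FROM tblPostcodes", dbFailOnError
    db.Execute "INSERT INTO tblPostcodes SELECT * FROM tblPostcodesNew", dbFailOnError

    ' Recreate the index once the data is back in
    db.Execute "CREATE INDEX idxPostcode ON tblPostcodes (Postcode)", dbFailOnError
End Sub
```

Repeat the DROP/CREATE pair for each index on the table if there are several.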
It's effectively a read-only table at other times, so I don't have to worry about the 1.6GB BE database growing in between updates. There is a second & much smaller BE file with tables that are updated by user actions e.g. downloaded addresses for selected postcodes, nearby places searches etc.
Yes, I did populate then delete, compacted, then repeated.
For info, times were similar.
As the large postcodes BE doesn't get altered by user actions, its data & indexes don't become fragmented.
Following your comments, I've just repeated the tests on a student attendance marks table with 20 fields and 1.33 million records. This is regularly updated each day, so I assume it is also highly fragmented. This time I emptied the table first then repopulated it. I did each of these 3 times. The code was the same as that in my previous post.
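For anyone who wants to replicate the test, it was along these lines. This is a simplified sketch, not the exact code; tblMarks and the linked table name are placeholders:

```vba
' Simplified sketch of the empty-then-repopulate timing test.
' tblMarks is the local Access table; dbo_Marks is the linked SQL Server table.
' Both names are placeholders.
Sub TimeReload()
    Dim db As DAO.Database
    Dim t As Single
    Set db = CurrentDb

    ' Time emptying the local table
    t = Timer
    db.Execute "DELETE FROM tblMarks", dbFailOnError
    Debug.Print "Empty: " & Format(Timer - t, "0.000") & " s"

    ' Time repopulating from the linked SQL table
    t = Timer
    db.Execute "INSERT INTO tblMarks SELECT * FROM dbo_Marks", dbFailOnError
    Debug.Print "Repopulate: " & Format(Timer - t, "0.000") & " s"
End Sub
```

Note that Timer only has a resolution of about 1/64 s, which is why very fast operations can record as zero.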
Emptying was so fast that the time was recorded as zero twice, with 0.016 s in between those two.
Repopulating from a linked SQL table took 112 s, then 86 s, then 88 s. The table had only 2 indexed fields at this point
I deliberately didn't compact between tests so the db grew from 248MB to 900MB
See attached text file for results if interested
UPDATE:
I then compacted, indexed a further 6 fields and repeated the tests 3 more times
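For completeness, the extra indexes were added with simple CREATE INDEX statements along these lines (the field and index names shown are placeholders, not the actual 6 fields):

```vba
' Sketch: indexing additional fields on the marks table.
' Field/index names are placeholders for the fields actually indexed.
Sub AddExtraIndexes()
    Dim db As DAO.Database
    Set db = CurrentDb
    db.Execute "CREATE INDEX idxStudentID ON tblMarks (StudentID)", dbFailOnError
    db.Execute "CREATE INDEX idxMarkDate ON tblMarks (MarkDate)", dbFailOnError
    ' ...and similarly for the remaining fields
End Sub
```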
Deletion times were relatively unaffected (0 s / 0.164 s / 0.031 s)
As expected, repopulation times increased significantly (126 s / 163 s / 127 s)
Of course, indexing increased the table size as well so the db grew from 278MB to 1.36GB
Results also attached in 2nd text file