Compile error 64-bit system

Just one BIG caveat to that advice.
APIs adapted to work in 64-bit Access will also run in 32-bit Office from A2010 onwards (VBA7).
For example, below I'm using conditional compilation so these two API declarations will work in all versions and bitnesses:

Code:
'###############################################
'Updated by Colin Riddington - 26/01/2019
#If VBA7 Then 'A2010 or later (32/64-bit)
    Private Declare PtrSafe Function FindWindowA Lib "user32" _
        (ByVal lpClassName As String, ByVal lpWindowName As String) As LongPtr

    'Parameters follow the Win32 order: hWnd, hWndInsertAfter, X, Y, cx, cy, wFlags
    Private Declare PtrSafe Function SetWindowPos Lib "user32" _
        (ByVal hWnd As LongPtr, ByVal hWndInsertAfter As LongPtr, ByVal X As Long, _
        ByVal Y As Long, ByVal cx As Long, ByVal cy As Long, ByVal wFlags As Long) As Long
#Else 'A2007 or earlier (32-bit)
    Private Declare Function FindWindowA Lib "user32" _
        (ByVal lpClassName As String, ByVal lpWindowName As String) As Long

    Private Declare Function SetWindowPos Lib "user32" _
        (ByVal hWnd As Long, ByVal hWndInsertAfter As Long, ByVal X As Long, _
        ByVal Y As Long, ByVal cx As Long, ByVal cy As Long, ByVal wFlags As Long) As Long
#End If
'###############################################

But unless you also need to cater for users with A2007 or earlier, there is no need to worry about conditional compilation, so you just need this:
Code:
'###############################################
Private Declare PtrSafe Function FindWindowA Lib "user32" _
    (ByVal lpClassName As String, ByVal lpWindowName As String) As LongPtr

'Parameters follow the Win32 order: hWnd, hWndInsertAfter, X, Y, cx, cy, wFlags
Private Declare PtrSafe Function SetWindowPos Lib "user32" _
    (ByVal hWnd As LongPtr, ByVal hWndInsertAfter As LongPtr, ByVal X As Long, _
    ByVal Y As Long, ByVal cx As Long, ByVal cy As Long, ByVal wFlags As Long) As Long
'###############################################
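
To show how those declarations can be used, here's a quick illustrative sketch that pushes a window to the top of the z-order. The SWP_* and HWND_TOPMOST values are the standard Win32 constants, but the window title is just a placeholder - substitute whichever window you actually need:
Code:
'Illustrative sketch only: "Calculator" is a placeholder window title
Private Const HWND_TOPMOST As Long = -1
Private Const SWP_NOSIZE As Long = &H1
Private Const SWP_NOMOVE As Long = &H2

Public Sub MakeWindowTopmost()
    Dim hWnd As LongPtr
    'vbNullString matches any window class, so we search by title only
    hWnd = FindWindowA(vbNullString, "Calculator")
    If hWnd <> 0 Then
        'The zero X/Y/cx/cy arguments are ignored because of SWP_NOMOVE and SWP_NOSIZE
        SetWindowPos hWnd, HWND_TOPMOST, 0, 0, 0, 0, SWP_NOMOVE Or SWP_NOSIZE
    End If
End Sub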

In most cases, conversion isn't that difficult, but it's tedious and can be time-consuming.
So, if I were you, I would start converting your projects to work in both bitnesses as & when you have time to do so.
Look at example apps that have been converted.
For example, all of mine are supplied with conditional compilation.

At some point, you will indeed have to deal with the situation.
Better to be prepared in advance rather than face the situation I had in 2014 of having to convert several huge applications in a two-week timeframe... when I really didn't know what I was doing.
 
Surely, reverting to the 32-bit version is just going to delay an inevitable crunch when MS stops supporting it, along with potentially creating additional work in the future?

Though modern machines are, more often than not, based on 64-bit architecture, there are tons of 32-bit systems still in the wild, providing MS with a suitable market to justify continued 32-bit support. Not to mention that 64-bit machines are merely extensions of 32-bit systems, so there will be no cases of any merit in which a particular 32-bit package is missing an instruction because you are on a 64-bit platform.

As to extra work in the future, one would hope that eventually MS would "catch up" with the libraries they haven't converted yet. But here is the secret as to WHY they haven't converted everything. You see, about half of the utilities they run, including nearly ALL of the utilities accessible from the CMD prompt, are still 32-bit programs for which the 32-bit libraries are still needed. For them to abandon 32-bit support in Office would imply that they had actually gotten around to "fixing" all of their own behind-the-scenes programs. Want to see how many there are? Start up Task Manager and switch to the Processes tab. Now click the process name column header, which sorts processes alphabetically by name. Scroll down to SVCHOST.EXE and start counting. Most of those are 32-bit SVCHOST instances, and they are there because of all of the unconverted support libraries that Windows itself uses.
 
You'd think by this time they would have gotten better at this. We went from 8 to 16 to 32 to 64 bits. Each upgrade has caused problems. I think Windows came in at 16, but that was a long time ago. I think all we are talking about is the size of internal registers and variables. How hard can it be to upgrade libraries with well-tested code?
 
Watched the Richard Rost video recommended by jdraw and had it sorted fairly quickly. Thanks to all for their input :love:
 

Size AND NATURE of internal registers, Pat. On the Intel boxes that became part of the PC, initially they didn't have memory management because the 16-bit bus could only handle 64 KB of physical memory anyway. The PC has gone through several memory models based more on internal memory bus size than anything else, though of course the processor chips ALSO grew. But we are lucky in a way. The FIRST Intel chip that could be programmed for device sequencers (think: automated laundry units) was the 4004, which was - you guessed it - based on a 4-bit "nibble." (A nibble is, of course, smaller than a byte.) Upgrading libraries in the days of the 16-bit machines was radically different from doing it when you had 32 bits to play with. The problem is ALWAYS that program demand for space increases faster than the space that can be supplied. With 64-bit addressing in the hardware, though, a LOT of neat tricks are available. The only catch? The modules were written to take advantage of the tricks of 32-bit space in order to make things fit, so programs had to be written cleverly and with an eye towards not wasting any space.

For example, if you write a MAIN program, your code will start above 1 MB because the first MB of memory is reserved for memory management structures. The Access 2 GB limit stems from having MSACCESS.EXE in the low half of memory along with the DLL files - libraries - and the program heap and stack structures. The tables and other database objects are in the high half of memory. But remember, that is half of a 32-bit addressing space, which turns out to be ... 2 GB.
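
To put numbers on that, the arithmetic is easy to check from the Immediate window (purely illustrative):
Code:
Public Sub ShowAddressSpaceMaths()
    'A 32-bit pointer can address 2 ^ 32 bytes = 4 GB in total
    Debug.Print "Full 32-bit space:"; 2 ^ 32 / 1024 ^ 3; "GB"   'prints 4
    'Access gets half of that for tables and other database objects
    Debug.Print "Half the space:"; 2 ^ 31 / 1024 ^ 3; "GB"      'prints 2
End Sub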
 
For an expert, I would have thought it would be relatively easy to copy the lines, comment them out and amend them for 64-bit?
It is only the API calls, after all, is it not?
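Something like this, I mean, with the old line kept as a comment (GetTickCount is just an arbitrary example):
Code:
'Original 32-bit declaration, commented out for reference:
'Private Declare Function GetTickCount Lib "kernel32" () As Long

'Amended version: PtrSafe added; the return type stays Long (a DWORD)
Private Declare PtrSafe Function GetTickCount Lib "kernel32" () As Long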

I am on 2007, so no big deal for me. :)
Again, if you want Microsoft to start modifying your code for you, you're braver than I am.
 
I should add that I'm not referring to the technical ability to do it, but rather to the practice of having any outside actor unilaterally take responsibility for re-factoring your code according to that actor's opinion of what the code should have been. Rewriting APIs seems like a special case of a more general practice.
 
If you don't have to support any users still on 32-bit, then converting now is fine.
Once the changes for 64-bit compatibility are tackled, it usually requires only minimal additional effort to stay compatible with 32-bit. So, this is no real reason not to convert to 64-bit.
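
For example, LongPtr itself does the adapting: the very same declaration compiles to a 4-byte value in 32-bit Office and an 8-byte value in 64-bit Office. A quick illustrative check:
Code:
Public Sub ShowPointerSize()
    Dim p As LongPtr
    'LenB reports the storage size: 4 in 32-bit Office, 8 in 64-bit
    Debug.Print "LongPtr occupies"; LenB(p); "bytes"
End Sub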
 
Yes, you can modify the code to work with both 32- and 64-bit APIs, but the problem is if you have to distribute .accde's. You need to create them with 32-bit Access.
 
Doc, I understand that stuff. It is very similar to the way memory allocation worked in the days of the mainframe, going all the way back to when we had three fixed partitions, so only three programs could run at one time and we had to assign the partition when we compiled the code. It was very exciting when we upgraded to MVS and memory allocation became variable, so lots more programs could run at one time and the partition your program loaded into was decided at run time by the OS's loader program. Even in the ancient history of the mainframe there were still programs, like Terminate and Stay Resident ones, that always had to be loaded into high (or was it low?) memory because otherwise they would interfere with the variable part of memory where programs came and went. Compiled code is always relative to an address. There really is no other way to do it. The compiler converts all variables to a register + offset. The register doesn't get filled until the code is loaded into memory. Then the compiled code in memory works with the loaded register, which is the address where the code got loaded, plus some offset to reference every variable.

We are not talking about changes to the OS, and we are not talking about changes to compiled code; we are talking about changes to the declaration of variables in source code. What other code in the program would have to change? Why is changing the library code harder than changing a variable you defined in your app from Single to Double? You might have a bunch of places to change the size if you have defined work areas, but there are no logic changes.
 

Granted, a lot of .DLL files internally use relative addressing modes, which ARE supported on an INTEL chip that has a PC register and which operate perfectly well even when the offsets are 16-bit or 32-bit. I think it may be my cautious nature (along the lines of "once burned, twice shy") that makes me hesitant to suggest that the conversion is easy.
 
Unless a program always loads at address x, base + offset is the only way to reference variables. I spent a lot of time reading assembler language code in my COBOL days, since that is how we debugged. Debuggers weren't available until the '80s at least.

It must not be that straightforward or it would have been done. Maybe it is because the code is all written in such a low-level language that there are "unintended consequences" when you change the definition of a variable.
 
