Solved Dynamic Insert statement

Kingz

I have a function in which I am trying to build an INSERT INTO statement by saving the column names from a SELECT and using that output to write the INSERT INTO. So I try to create a 2-dimensional array, filling the first slot with the areas (651, 652 and 801) and the second slot with the column names, which I can get from a SELECT.

So my function starts like this:

Code:
Function create_insert() As Boolean

Dim dbs As DAO.Database
Dim rs As DAO.Recordset
Dim i As Integer, j As Integer
Dim arrCol(0 To 2, 0 To 75) As Variant
Dim A(0 To 2) As Variant

A(0) = "651"
A(1) = "652"
A(2) = "801"

Set dbs = CurrentDb()
For i = LBound(A) To 2
    Set rs = dbs.OpenRecordset("select..." & A(i))
    Do While Not rs.EOF
        arrCol(i, 1) = A(i)
        arrCol(i, i + j) = rs(0).Value
        j = j + 1
        rs.MoveNext
    Loop
Next i

But I'm having problems with the dimensions of the array.
Because it's 3 tables (areas) that I'm getting the columns from, I need to hold both the table and the column information..
What am I doing wrong?
 
Are you getting any error messages?
 
Yes, I was getting an error 9, which basically says the subscript is out of range.. Let me modify the question to make it more concrete... I now have Dim arrCol(0 To 2, 0 To 75) As Variant,
which is static, but I would now actually like the limit reduced to what's necessary, i.e. the record count of the 3 tables. Can I somehow set the limit dynamically?
 
Okay, that's what I thought. Take a look at the ReDim statement and see if that helps.
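
For example, a minimal sketch of sizing the array from a record count (DAO assumed; the SQL and the names are placeholders):
Code:
Dim dbs As DAO.Database, rs As DAO.Recordset
Dim arrCol() As Variant   ' dynamic array: declared with empty parentheses

Set dbs = CurrentDb()
Set rs = dbs.OpenRecordset("SELECT ColumnName FROM tbl651")   ' placeholder SQL
If Not rs.EOF Then
    rs.MoveLast   ' DAO only knows the full count after MoveLast
    ReDim arrCol(0 To 2, 0 To rs.RecordCount - 1)
    rs.MoveFirst
End If

Note that ReDim Preserve can later resize only the last dimension of a multi-dimensional array.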
 
Are you just trying to get the recordset into an array?

If so, you can just use the .GetRows() method of an ADODB recordset.

The one caveat is that it populates the array somewhat unintuitively: the first dimension is the field position, the second dimension is the row number.

You can test with this simple example:
Code:
Function GetRowsTest(strSQL As String) As Boolean

  Dim iRow As Integer, iField As Integer, data As Variant

  On Error Resume Next   ' so the final Err check reports success/failure

  With CurrentProject.Connection.Execute(strSQL)
    data = .GetRows      ' 2-D Variant array: (field, row)
    .Close
  End With
  Debug.Print "Fields: " & UBound(data, 1) + 1
  Debug.Print "Rows:   " & UBound(data, 2) + 1
  Debug.Print
  For iRow = 0 To UBound(data, 2)
    Debug.Print iRow + 1,
    For iField = 0 To UBound(data, 1)
      Debug.Print data(iField, iRow),
    Next iField
    Debug.Print
  Next iRow

  GetRowsTest = (Err = 0)

End Function

Call from the Immediate Window (Ctrl+G) with:
Code:
?GetRowsTest("SELECT * FROM YourTable;")

You can also run a transpose function on the array to swap it around, but it might take a bit of time for a large recordset.
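
If you do need it row-major, a minimal transpose sketch (assuming data is the two-dimensional Variant array that GetRows returns):
Code:
Function TransposeArray(data As Variant) As Variant

  Dim result() As Variant
  Dim i As Long, j As Long

  ReDim result(LBound(data, 2) To UBound(data, 2), _
               LBound(data, 1) To UBound(data, 1))
  For i = LBound(data, 1) To UBound(data, 1)
    For j = LBound(data, 2) To UBound(data, 2)
      result(j, i) = data(i, j)   ' swap the two dimensions
    Next j
  Next i
  TransposeArray = result

End Function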

Why do you want to get a recordset into an array anyway? Can you not just use the recordset object itself?
 
Are you trying to create a generic function? Because if you are not, you are making this way harder than it needs to be.

First, I would have the calling program create a string with the column names rather than making them another dimension in an array. Then you still need two dimensions for the array: one dimension is the column values, in the EXACT order you built the name string in, and the other dimension is the data type. To build the append query string correctly, the function needs to know the data type of each field so that it can delimit each value correctly in the string.
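
To illustrate the delimiting point, a rough sketch (the DAO type constants are real; the helper itself is just hypothetical):
Code:
' Hypothetical helper: wrap a value in the delimiter its DAO type requires.
Function DelimitValue(varValue As Variant, lngType As Long) As String
    Select Case lngType
        Case dbText, dbMemo
            DelimitValue = "'" & Replace(varValue, "'", "''") & "'"
        Case dbDate
            DelimitValue = "#" & Format(varValue, "yyyy\-mm\-dd") & "#"
        Case Else   ' numeric types need no delimiter
            DelimitValue = CStr(varValue)
    End Select
End Function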

The code you have posted makes no sense. Why is the data hard coded?

Please explain your intention so we don't have to guess.
 
I've been using something based on code MajP shared. This is only using one table versus the three you mentioned in your post, but it could maybe be a starting point.
In the function below strTable is my source table, and nstrTable is the new/target table.

post here: https://www.access-programmers.co.u...lumns-for-adding-recs-to-access-table.300086/

Code:
Public Function MakeInsert(strTable As String, nstrTable As String)
Dim fld As DAO.Field
Dim strSQL As String
Dim strFlds As String
Dim strColumns As String
Dim td As DAO.TableDef
Dim db As DAO.Database

Set db = CurrentDb()
strSQL = "INSERT INTO " & nstrTable
Set td = db.TableDefs(strTable)
For Each fld In td.Fields   ' build a bracketed, comma-separated field list
    If strFlds = "" Then
      strFlds = "[" & fld.Name & "]"
    Else
      strFlds = strFlds & ",[" & fld.Name & "]"
    End If
Next fld
strColumns = "(" & strFlds & ")"
strSQL = strSQL & " " & strColumns & " SELECT " & strFlds & " FROM " & strTable
'if using the function to return the string, remove db.Execute and add As String after the function name
'MakeInsert = strSQL
'Debug.Print strSQL
db.Execute strSQL
Set db = Nothing
End Function
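
For example, it could be called like this from the Immediate Window (the table names are placeholders):
Code:
MakeInsert "tblOriginal", "tblCopy"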
 
@adhoustonj If you are copying all the columns from one table into a different table, why not just create a querydef and save it? If you are really changing the from and to table names on the fly, there is probably something wrong with your schema. Also, since your code is not ignoring the autonumber, you don't even need a field list.

Code:
INSERT INTO tbl1
SELECT tbl2.*
FROM tbl2;
 
@Pat Hartman I'm using it to fix Access BE tables before migrating to SQL Server, and was working to automate it. I have a lot of dbs to migrate, and some are 200+ table databases with a mix: tables with no PK or AutoNumber, AutoNumber with no PK, PK with no AutoNumber, or.. nothing. So I loop over all tables, determine if fixes are needed, make a copy of the original table structure, set/add the AutoNumber, add the PK constraint, and then insert all data from the original table into the new table.
 
That makes sense. We'll wait to hear what @Kingz needs the functionality to actually do. He may be doing something similar. In any case, if you are copying all the columns including the PK, I wouldn't bother with building the field list and values.

I wish there were a Select * (without the pk) command;)
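
For what it's worth, you can approximate it in DAO by skipping the fields of the primary key index; a rough, untested sketch (the function name is my own):
Code:
Function FieldListWithoutPK(strTable As String) As String

    Dim db As DAO.Database, td As DAO.TableDef
    Dim idx As DAO.Index, fld As DAO.Field
    Dim strPK As String, strList As String

    Set db = CurrentDb()
    Set td = db.TableDefs(strTable)
    For Each idx In td.Indexes   ' collect the names of the PK fields
        If idx.Primary Then
            For Each fld In idx.Fields
                strPK = strPK & ";" & fld.Name & ";"
            Next fld
        End If
    Next idx
    For Each fld In td.Fields    ' keep every field that is not in the PK
        If InStr(strPK, ";" & fld.Name & ";") = 0 Then
            strList = strList & IIf(strList = "", "", ",") & "[" & fld.Name & "]"
        End If
    Next fld
    FieldListWithoutPK = strList

End Function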
 
Ok, I'll try and elaborate...
Imagine I have a table which has the columns "field heading" and "area".
So the datasets are like this:
1.3.9., 651
1.4., 651

0.7., 652

0.9., 801
These are actually 3 separate tables.

From another table I have values for each field heading. value_table has the columns "intID", "field heading" and "weight", like this:
"4,1.3.9., 54"
"4,1.4., 89"
"4, 0.7., 8"

Now I need to create a dynamic insert into statement looking like this:
insert into main_table (intID,[1.3.9.],[1.4.],...) values(4,54,..)

So my idea was to get the info organised in arrays and churn the field names and values out into the SQL statement.

So it sounds like maybe my way around the problem is questionable..
 
1.3.9., 651
1.4., 651

0.7., 652

0.9., 801
These are actually 3 separate tables.

Is it always going to be the same 3 tables that you are generating your insert statement from, and why 3 tables? Is it always 3, or sometimes more/less? It would be pretty cool if all 3 tables had the same structure so you could make them 1 table.

Instead of

tbl_A:
Field_Heading   Area
1.3.9.          651
1.4.            651

tbl_B:
Field_Heading   Area
0.7.            652

tbl_C:
Field_Heading   Area
0.9.            801

You could have

tbl_Field_Area:
Source   Field_Heading   Area
A        1.3.9.          651
A        1.4.            651
B        0.7.            652
C        0.9.            801

It sounded like you just need to join your table with (field_heading, area) to table with (int_id, field_heading, weight).

Code:
INSERT INTO main_table (intID, field_heading, weight, area)
SELECT value_table.intID, value_table.field_heading, value_table.weight, area_table.area
FROM value_table
INNER JOIN area_table ON area_table.field_heading = value_table.field_heading;
 
insert into main_table (intID,[1.3.9.],[1.4.],...) values(4,54,..)
that suggests that main_table looks like this

intID   1.3.9.   1.4.   0.7.
4       54       89     8

Besides being horribly de-normalized, this is impossible. You cannot create field names with "." in the name.
Names of fields, controls, and objects in Microsoft Access desktop databases:

  • Can be up to 64 characters long.
  • Can include any combination of letters, numbers, spaces, and special characters except a period (.), an exclamation point (!), an accent grave (`), and brackets ([ ]).
  • Can't begin with leading spaces.
  • Can't include control characters (ASCII values 0 through 31).
  • Can't include a double quotation mark (") in table, view, or stored procedure names in a Microsoft Access project.
 
@Kingz Your restatement still does not tell us your actual objective. All we know is that you have a technical problem that you don't have the coding skills to solve. What you presented us with is YOUR solution with a request to make it work. Well, it's not actually that simple. The "solution" depends on what business problem you are trying to solve. Is this a one time event? Is it a repeating event where you have to append data every "week" from an external source and for some reason you think that your method is simpler than just making a "dozen" append queries? Are the tables always the same? Are you trying to make a tool for yourself as @adhoustonj was where he was doing a huge conversion with hundreds of tables? Are you trying to make a tool for the user to use? Each of these business problems could use different techniques to solve effectively/efficiently.
 
Well, I need to create this INSERT INTO dynamically every month. It sounds like creating a querydef is the best way to go about it.
I mean, all I'm trying to say is that I have the information for an INSERT INTO SQL statement in different tables, and I need to construct the SQL dynamically and then push the information into a final table. Please don't worry about how normalized my tables are, or their field names; I have kept it simple for illustrative purposes. The main problem is that the information in rows from the main table is what I need as column names for my SQL. In reality there are about 40 rows, hence 40 columns from the 3 area tables.
 
Ok, I'll try and elaborate...
Imagine I have a table which has the columns "field heading" and "area".
So the datasets are like this:
1.3.9., 651
1.4., 651

0.7., 652

0.9., 801
These are actually 3 separate tables.

From another table I have values for each field heading. value_table has the columns "intID", "field heading" and "weight", like this:
"4,1.3.9., 54"
"4,1.4., 89"
"4, 0.7., 8"

Now I need to create a dynamic insert into statement looking like this:
insert into main_table (intID,[1.3.9.],[1.4.],...) values(4,54,..)

So my idea was to get the info organised in arrays and churn the field names and values out into the SQL statement.

So it sounds like maybe my way around the problem is questionable..
This has little to do with real databases; I would classify it as experimental programming.
But technically that's not a big challenge either.

1) Bring the records from the three tables together => UNION query
2) Use it to create a crosstab query
3) Use a make-table query to create a new table from the crosstab query
There is not much strenuous dynamic SQL involved.
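
For instance, a rough sketch of the three steps in Access SQL (the table and column names are assumptions based on the earlier posts, and each step would be a separate saved query; the pivot swaps the periods out of the headings because of the naming rules quoted above):
Code:
1) UNION query (save as qryAreas):

SELECT [field heading], area FROM tbl_651
UNION ALL
SELECT [field heading], area FROM tbl_652
UNION ALL
SELECT [field heading], area FROM tbl_801;

2) Crosstab query over the joined values (save as qryCross):

TRANSFORM First(v.weight)
SELECT v.intID
FROM value_table AS v
INNER JOIN qryAreas AS a ON v.[field heading] = a.[field heading]
GROUP BY v.intID
PIVOT Replace(v.[field heading], ".", "_");

3) Make-table query from the crosstab:

SELECT qryCross.* INTO main_table
FROM qryCross;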
 
This has little to do with real databases; I would classify it as experimental programming.
But technically that's not a big challenge either.

1) Bring the records from the three tables together => UNION query
2) Use it to create a crosstab query
3) Use a make-table query to create a new table from the crosstab query
There is not much strenuous dynamic SQL involved.
Oh wow.. That sounds great.. I'll try that out. I need to look up a few things like "make table query".. I guess I can do that with code, i.e. VBA, too..
 
As @MajP said - if your field headers really are 1.3.9, 1.4, etc. then you can't use those values as field names / column headers. You would have to replace the "." and change to something else, such as 1_3_9, 1_4, etc.

You can have field names with "." in them in SQL Server. You cannot in MS Access. You can't use those names in a query or a table, or when linking to a SQL Server table.
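
For example, a one-line cleanup in VBA (strFieldName is just an illustrative variable):
Code:
strFieldName = Replace(strFieldName, ".", "_")   ' "1.3.9." becomes "1_3_9_"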
 
and I need to construct the SQL dynamically,
Why? Why can't you use a saved query? I have many applications that import files on a regular basis. Usually the file names are different each month. In several of the applications, the process is even automated by ensuring the files get downloaded to a specific folder. Then all files in that folder are imported. In the case of the bank statements, the format is different for each bank. The bank is the first part of the file name so that tells me which query I need to use because for bank A, the format is always the same and the target is my transaction table. But bank B has a different input format so I use a separate query.

You still haven't described the business problem. You keep telling us YOUR solution.

If the data is temporary, the best solution is to import into a "side" BE database. This is a separate BE that is recreated at the start of each input cycle. This eliminates the bloat caused by make-table queries or by adding/deleting rows.
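
A minimal sketch of recreating such a side BE with DAO (the path is a placeholder; dbVersion120 creates the .accdb format):
Code:
Dim dbSide As DAO.Database
Dim strPath As String

strPath = CurrentProject.Path & "\SideData.accdb"   ' placeholder location
If Dir(strPath) <> "" Then Kill strPath             ' discard last cycle's file
Set dbSide = DBEngine.CreateDatabase(strPath, dbLangGeneral, dbVersion120)
dbSide.Close                                        ' fresh, bloat-free side BE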

As the others have said, if you are using Access, you cannot have column names that include periods.
 
