Solved: How to update the temp table with an insert query

nector (Member · Joined Jan 21, 2020 · Messages: 462)
Dear all, good evening!

I have created an insert query in MS Access for the purpose of updating a temp table with matching fields. The data is supposed to be copied from tblProducts to tblProductsTemp; the query should only append new records, never overwrite the existing data.

If you compare the two tables you will find that tblProductsTemp is missing two records compared to tblProducts. The goal is to append only the missing records via the start-up form's Open event. Duplicates must never be allowed, only clean records.

VBA Code

Code:
Private Sub Form_Open(Cancel As Integer)
    DoCmd.SetWarnings False
    DoCmd.OpenQuery "QryStockUpdates"
    DoCmd.SetWarnings True   ' restore warnings so later action queries still prompt
End Sub

The actual insert query requires fine-tuning so that it does not allow duplicates:


Code:
INSERT INTO tblProductsTemp ( ProductID, ProductName, Type )
SELECT tblProducts.ProductID, tblProducts.ProductName, tblProducts.Type
FROM tblProducts;
 


I don't see the real-world need for this process. If you have a table that contains authoritative data, your work is done. Now you want to copy it to some other table inside your system? Why?
 
It is easy to add a WHERE clause to your SELECT statement.
Code:
INSERT INTO tblProductsTemp ( ProductID, ProductName, Type )
SELECT tblProducts.ProductID, tblProducts.ProductName, tblProducts.Type
FROM tblProducts
WHERE tblProducts.ProductID NOT IN (SELECT tblProductsTemp.ProductID FROM tblProductsTemp);

But why should you duplicate data that is already in your database?
And what if a product is changed in or deleted from tblProducts?
 
SQL:
INSERT INTO
   tblProductsTemp(
      ProductID,
      ProductName,
      Type
   )
SELECT DISTINCT
   P.ProductID,
   P.ProductName,
   P.Type
FROM
   tblProducts AS P
      LEFT JOIN tblProductsTemp AS T
      ON P.ProductID = T.ProductID
WHERE
   T.ProductID IS NULL
The NOT IN construct is a worse solution because it does not allow index usage.
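For completeness, NOT EXISTS is another way to write the same anti-join; unlike NOT IN, it can usually make use of the index on ProductID. A sketch using the table names from this thread:
SQL:
INSERT INTO
   tblProductsTemp(
      ProductID,
      ProductName,
      Type
   )
SELECT
   P.ProductID,
   P.ProductName,
   P.Type
FROM
   tblProducts AS P
WHERE
   NOT EXISTS (
      SELECT 1
      FROM tblProductsTemp AS T
      WHERE T.ProductID = P.ProductID
   )
All three forms (LEFT JOIN ... IS NULL, NOT EXISTS, NOT IN) select the same rows here; they differ only in how the engine can execute them.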
 
Many thanks to you all, and special regards to XPS35 and ebs17; this is exactly what I wanted. Now, why did I need this? I have an Azure database. To increase speed, I want a few tables to reside in the front end to help speed up the product combo box. To keep the front-end table current on all machines, that code has to run at start-up to copy data from a linked table to a local table. This way I can assure you the speed is really great for users.

Final part also required

The remaining part: is it possible, in a similar manner, to update the price table as well, in case of price changes in the Azure database? The sample code I thought would work is below, but it fails:

Code:
UPDATE tbpPricing SET tbpPricing.Price = tbpPricingTemp.Price;

I would not mind people waiting a little to log in, but once logged in they will be assured of quality speed; that is my goal. The combo boxes above, with 600 records, load faster this way. I personally tested them two days ago.
 


Okay, the answer is finally found, see below:

Code:
UPDATE tbpPricing INNER JOIN tbpPricingTemp ON tbpPricing.PriceID = tbpPricingTemp.PriceID SET tbpPricingTemp.Price = [tbpPricing].[Price];
 
SQL:
UPDATE
   tbpPricing AS P
      INNER JOIN tbpPricingTemp AS T
      ON P.PriceID = T.PriceID
SET
   P.Price = T.Price
WHERE
   P.TimeStampChanged < T.TimeStampChanged
With the introduction of a timestamp for record updates, you can limit the action to changed records in the query. So far you have been updating all records, which is unnecessary and takes more time.
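Note that Access/Jet has no automatic row-version column, so the TimeStampChanged fields assumed in the WHERE clause above would have to be added to both tables and maintained by the application whenever a price changes. A sketch of the DDL (Access executes one DDL statement per query, so run these separately):
SQL:
ALTER TABLE tbpPricing ADD COLUMN TimeStampChanged DATETIME;

ALTER TABLE tbpPricingTemp ADD COLUMN TimeStampChanged DATETIME;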
 
Why would you not just add a unique index to the table? THAT will let the database engine do the work and your query can just append everything. Duplicates will be rejected and that is fine.
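For illustration, such a unique index on the temp table could be defined with Jet DDL like this (the index name uxProductID is my own choice):
SQL:
CREATE UNIQUE INDEX uxProductID
ON tblProductsTemp (ProductID);
With this in place, any append that would create a duplicate ProductID is rejected by the engine itself, regardless of how the rows arrive.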
 
A unique index is always good and recommended as a safeguard against unwanted duplicates.
But why fire tens of thousands of records against an index when you really only have 10 records to add? That is unnecessary effort, and index errors are errors too. Who intentionally makes errors?
 
I'm not arguing against proper row selection, but there is a gatekeeper to prevent duplicates. Why not use it?
 
Maybe your own arguments are easier to understand.
Select only the columns and rows you need for the task at hand

You know that firing duplicates at a unique index consumes AutoNumber values in the target table. That in itself is a disaster for some, but in any case it shows that something is happening in the table: the gate is much further back, not in front of the table.

In a case reported in a forum, which I can neither confirm nor refute due to a lack of information, there was real and considerable bloating of the database in such a case.

For me, however, my statement that index errors are also errors is sufficient. I prefer to work error-free, or at least with as few errors as possible. That is worth a somewhat longer SQL statement to me, and writing one costs me no special effort.

The "gatekeeper" is the kind of insurance that is best never needed.
 
@ebs17 We all want to work error-free. That doesn't mean we should ignore a simple solution that prevents accidents. You seem to be arguing in favor of NOT defining unique indexes to enforce business rules, which I find a little hard to believe.
 
You seem to be arguing in favor of NOT defining unique indexes to enforce business rules
You are pulling this conclusion out of thin air; it cannot be found in any of my posts.
A unique index as protection against unwanted duplicates works, as is well known, at the table level and is therefore comprehensive and secure against all conceivable kinds of table entry: bound forms, action queries, recordset operations, standard imports, manual table entries.

But, and it is a big but: if I have built a firewall for protection for a good reason, I will not constantly set fires against that wall without necessity just because it is "easy" and means less work for me.

My clear statement throughout is: use the index as insurance, and additionally soft-check for duplicates in your own code.
This is also often done with bound forms, where you check before updating whether a table entry already exists, in order to give the user your own message instead of the less helpful system message.

In addition, I sometimes write more complex queries for more complex tasks, where it is also necessary to get duplicate-free records without a unique index. So there's no harm in having that ability.
 
You are pulling this conclusion out of thin air; it cannot be found in any of my posts.
You keep arguing for a SQL solution. No one is arguing against using SQL to limit the data to be appended. I simply pointed out that having the database engine enforce RI is the rule of last resort, and you are arguing with me about it. Regardless of how you restrict the data with the append query, using a unique index to enforce a business rule, when possible, is a safety play that should not be ignored.
In addition, I sometimes write more complex queries for more complex tasks, where it is also necessary to get duplicate-free records without a unique index. So there's no harm in having that ability.
There you go again. Can we just agree to agree and be done with it? Feel free to have the last word.
 
You keep arguing for a SQL solution
This is a tendentious statement, and its conclusion is wrong.

nector showed his experiments with SQL queries to do certain things. I showed corrected statements that will get him there. I plead guilty to answering the questions accurately.
 
