I did ask whether he needed to do updates or whether the exchange was simply for new records, and I mentioned the two choices that apply when only adds are relevant. What is he supposed to do if there are "duplicates" in the input file? Should the entire file be set aside until it is corrected? Whenever two applications exchange information, there needs to be a protocol for validating and accepting the data, and that protocol has not been defined by this poster. I am merely emphasizing the importance of the unique index. NO MATTER WHAT ELSE he does, the index needs to be added. Preventing duplicates is the job of the database engine. You might also validate in code so you can give the user better error messages, but the buck stops with the engine, and if you don't assign this task to the engine by creating a unique index, you are a fool. I am trying very hard not to equivocate.
I don't want there to be any confusion in anyone's mind: if you need to prevent duplicates, you need a unique index. The rest of the process is workflow.
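To make the point concrete, here is a minimal sketch of what "the buck stops with the engine" means, using SQLite through Python's sqlite3 module. The table and column names (`customers`, `email`) are made up for illustration; the thread doesn't name the OP's actual tables.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, email TEXT)")
# The unique index: from here on, the engine itself rejects duplicates,
# no matter what the application code does or fails to do.
conn.execute("CREATE UNIQUE INDEX ux_customers_email ON customers(email)")

conn.execute("INSERT INTO customers (email) VALUES ('a@example.com')")

rejected = False
try:
    # Second insert with the same key: the engine refuses it.
    conn.execute("INSERT INTO customers (email) VALUES ('a@example.com')")
except sqlite3.IntegrityError:
    rejected = True  # app code can translate this into a friendly message

print("duplicate rejected by the engine" if rejected else "duplicate slipped through")
```

Application-side checks are still worth doing for usability, but even if they are skipped or buggy, the duplicate never lands in the table.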
This is what the OP said:
Clearly the reply didn't give you the warm fuzzies either, but you didn't insist on the index; that is what I am doing. You reiterated the importance of running the find duplicates query, but running that query is information only. It does NOT prevent adding duplicates, and THAT is the important part of the process. We don't even know whether anyone cares about the duplicates. They probably do, so I would also run the query, but it is far more likely that the duplicates are there because some other piece of data needs to change. That is why I asked yet again about the need to update existing data.
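The distinction matters: a find-duplicates query reports, it doesn't enforce. A typical version is just a GROUP BY with HAVING; a sketch against a hypothetical staging table (`incoming`, with an `email` column, both invented here for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE incoming (email TEXT, name TEXT)")
conn.executemany(
    "INSERT INTO incoming VALUES (?, ?)",
    [("a@example.com", "Ann"), ("b@example.com", "Bob"), ("a@example.com", "Ann")],
)

# Report which keys occur more than once -- nothing is rejected or removed.
dups = conn.execute(
    "SELECT email, COUNT(*) FROM incoming "
    "GROUP BY email HAVING COUNT(*) > 1"
).fetchall()
print(dups)  # -> [('a@example.com', 2)]
```

Note the duplicates are still sitting in `incoming` after the query runs; only a unique index on the destination table actually stops them from being added.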
I would probably go further. Not only would I check for duplicates, but where there are duplicates, I would compare each associated field. If they are all the same, I would simply ignore the duplicate; why make work you don't need to? If they are different, then something is amiss and it needs to be corrected.
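That triage, ignore exact duplicates, flag conflicting ones, add the rest, can be sketched like so. Again the table and field names (`existing`, `email`, `name`, `phone`) are placeholders I've invented, since the OP's schema isn't shown:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE existing (email TEXT PRIMARY KEY, name TEXT, phone TEXT)")
conn.execute("INSERT INTO existing VALUES ('a@example.com', 'Ann', '555-0001')")

incoming = [
    ("a@example.com", "Ann", "555-0001"),  # exact duplicate
    ("a@example.com", "Ann", "555-0099"),  # same key, different data
    ("b@example.com", "Bob", "555-0002"),  # genuinely new
]

to_ignore, to_flag, to_add = [], [], []
for email, name, phone in incoming:
    row = conn.execute(
        "SELECT name, phone FROM existing WHERE email = ?", (email,)
    ).fetchone()
    if row is None:
        to_add.append((email, name, phone))       # safe to insert
    elif row == (name, phone):
        to_ignore.append(email)                   # nothing to do
    else:
        to_flag.append((email, name, phone))      # needs a human decision

print(len(to_ignore), len(to_flag), len(to_add))  # 1 1 1
```

The flagged rows are exactly the case raised earlier in the thread: a "duplicate" key arriving with changed data usually means an update was intended, which is why the update-versus-add question has to be answered before the load process can be finished.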