The theory here is that while you certainly CAN store data in a table when that data is computed by some simple formula, you normally don't do so, for two reasons:
1. If either of the data sources changes and you didn't store a date with the computed value, then TECHNICALLY you have created a denormalization case. The computed value depends on something that is no longer (or never was) a key in the table that holds it (i.e., the date associated with the values in the source recordsets). Big normalization issue.
When DO you store a computed value? Suppose you have a store that sells things with special promotions, so your sale price has to be computed based on the promotions in effect on a certain date. Or suppose you say you will beat your competitor's price, and it is your competitor who is running the promotion, so the promotional price isn't EVER in your database. Then, to save yourself the time of searching for the promotions data, store a Yes/No flag that says "Promotional" and store both the final price AND the date (see the sketch after this list). That way you can reconstruct things if you have to, and at least you have the date on which that price was valid. This doesn't break normalization rules, because the stored price is now a recorded fact about that specific sale on that specific date, not a value derivable from other tables.
2. On most modern machines, re-doing the calculation each time is VERY fast. It is cheaper in terms of overall resources to simply recompute the darned thing each time you need it than to waste the storage space. You want to keep the stored tables very "tight": the bigger each record, the fewer records fit into a single buffer-block, and thus the more disk reads it takes to scan the table (see the second sketch at the end of this post).
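To make point 1 concrete, here is a minimal sketch of the promotional-price idea, using SQLite through Python. The table and column names (Sales, FinalPrice, SaleDate, IsPromotional) are hypothetical, invented just for this illustration:

import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway database for the demo

# Hypothetical Sales table: store the FINAL price, the date that price
# was valid, and a Yes/No flag saying whether a promotion was involved.
# The price is a recorded fact about that sale on that date, so keeping
# it here does not break normalization.
conn.execute("""
    CREATE TABLE Sales (
        SaleID        INTEGER PRIMARY KEY,
        ProductID     INTEGER NOT NULL,
        FinalPrice    REAL    NOT NULL,  -- price actually charged
        SaleDate      TEXT    NOT NULL,  -- date the price was valid
        IsPromotional INTEGER NOT NULL   -- 1 = promo price, 0 = regular
    )
""")

# A price-match sale: the competitor's promotion was never in our own
# database, so we record the resulting price, the date, and the flag.
conn.execute(
    "INSERT INTO Sales (ProductID, FinalPrice, SaleDate, IsPromotional) "
    "VALUES (?, ?, ?, ?)",
    (42, 9.99, "2024-03-15", 1),
)

for row in conn.execute("SELECT * FROM Sales"):
    print(row)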
BOTH of these concepts are designed to save you storage space and huge headaches. However, I'm sure that everyone thinks their case is "special" and therefore should be allowed to violate the rules, which is why I explained the rules: so you can tell whether your case does or doesn't merit an exception.
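And here is a sketch of point 2: store only the raw facts and recompute the derived value on every read. Again, the table and column names (OrderItems, Quantity, UnitPrice) are made up for the example:

import sqlite3

conn = sqlite3.connect(":memory:")

# Hypothetical OrderItems table: only raw facts are stored. There is no
# ExtendedPrice column, so each record stays narrow and more records
# fit into a single buffer-block.
conn.execute("""
    CREATE TABLE OrderItems (
        ItemID    INTEGER PRIMARY KEY,
        Quantity  INTEGER NOT NULL,
        UnitPrice REAL    NOT NULL
    )
""")
conn.executemany(
    "INSERT INTO OrderItems (Quantity, UnitPrice) VALUES (?, ?)",
    [(3, 4.50), (10, 0.99)],
)

# The multiplication is redone on every read; on modern hardware this
# is far cheaper than the extra disk reads a wider table would cost.
for row in conn.execute(
    "SELECT ItemID, Quantity * UnitPrice AS ExtendedPrice FROM OrderItems"
):
    print(row)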