In fact I basically put on the computer what we used to do with cards in boxes
A good first step.
Now, the next step is to look for sparse-field possibilities - a case in point being spousal data for an unmarried person. Split the table to separate the "always populated" fields from the "sometimes populated" ones and save some space. A JOIN query can be a recordsource just as easily as a table, so you lose nothing - and you gain a shorter table that scans quicker whenever you don't need the sparse data included. See, the shorter the record, the more records fit in a working buffer. The more you fit in a working buffer, the more you can scan in a single disk read, even when doing a non-indexed search. And there is your performance pay-off, the reason you look at this type of normalization.
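Just to make that concrete, here is a rough sketch in generic SQL - the table and field names (Persons, SpouseInfo, PersonID and so on) are made up for illustration, not taken from any actual design:

-- Base table: fields that are populated for every person.
CREATE TABLE Persons (
    PersonID    INTEGER PRIMARY KEY,
    LastName    VARCHAR(50) NOT NULL,
    FirstName   VARCHAR(50) NOT NULL,
    PostalCode  VARCHAR(10)
);

-- Satellite table: the sparse spousal fields, with a row only for married persons.
CREATE TABLE SpouseInfo (
    PersonID     INTEGER PRIMARY KEY REFERENCES Persons (PersonID),
    SpouseName   VARCHAR(100),
    MarriageDate DATE
);

-- A JOIN query that looks just like the old wide table; unmarried persons
-- simply come back with NULL in the spousal columns.
SELECT p.PersonID, p.LastName, p.FirstName, p.PostalCode,
       s.SpouseName, s.MarriageDate
FROM Persons AS p
LEFT JOIN SpouseInfo AS s ON s.PersonID = p.PersonID;

Save that last SELECT as a query and point your forms and reports at it, and the split is invisible to everything downstream.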
Before you say ... "But I always need that data!" - no, you don't. You only need that data when reporting on spousal issues, searching for persons who have spouses, or viewing detailed records. But if you are sorting on postal codes, you do that sort in one query (thus faster, 'cause the records are shorter) and join the spousal data to that query, not to the original table.
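For instance (same made-up names as above), the postal-code sort only ever touches the short Persons table, and the spousal fields get joined on afterwards, only where they exist:

-- The sort itself reads nothing but the short Persons records ...
SELECT PersonID, LastName, FirstName, PostalCode
FROM Persons
ORDER BY PostalCode;

-- ... and when the spousal columns really are wanted, join them onto
-- that same short result instead of scanning a wide table.
SELECT p.PersonID, p.LastName, p.PostalCode, s.SpouseName
FROM Persons AS p
LEFT JOIN SpouseInfo AS s ON s.PersonID = p.PersonID
ORDER BY p.PostalCode;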
THIS is what normalization gives you - the ability to isolate what is important at the moment; the ability to do as Julius Caesar did so long ago - divide and conquer. Paraphrasing: "Omna data in multa parta divisum est."
If I've forgotten the proper declension in a couple of cases, forgive me - my Latin is high-school variety and that's been ... well, a few years, let's say.