Joining large tables together

Uncle Gizmo

I have a small problem. I keep running into the error >>>> Record is too large. (Error 3047) <<<< I have tables similar to these and I want to join them together. I'm not talking about stacking them vertically, I'm talking about joining them side by side. In other words: the new table would have the same number of rows and contain all of the columns from the original tables. I have tried to do this in various ways and I keep running into the same error, Record is too large. (Error 3047).

Basically I am writing a blog post on the issue. I can't find any way of joining large tables together, but before I commit pen to paper saying it can't be done, as I have done on occasion only to find somebody knows something I don't, I thought I'd ask here first. Basically this is an exercise in avoiding a red face!
 


Re: Joining large tables together.

By the way, I should mention that I have found one way to do it: by converting all of the fields into Memo fields. For some reason it works if you do that.
 
Re: HELP! - Joining large tables together

Can you not join them using a SELECT query?
 
Re: HELP! - Joining large tables together

Can you not join them using a SELECT query?

Yes, thanks Bob, that works. However, I wanted a table... The really interesting thing is that you can join them with a query, but you can't create a table from it... it doesn't make sense. I'd like to know why it doesn't work... and why it works with Memo fields.
 
Re: HELP! - Joining large tables together

I expect that you have exceeded one of the database limits.

The problem with the limits idea is that it works with Memo fields. Why with Memo and not with text?

You would think that with the fields designed correctly, with the correct text format and the correct number of characters to fit the data, the table would take up less space than one full of Memo fields?
 
Re: HELP! - Joining large tables together

The problem with the limits idea is that it works with Memo fields. Why with Memo and not with text?

You would think that with the fields designed correctly, with the correct text format and the correct number of characters to fit the data, the table would take up less space than one full of Memo fields?

True, but even so, why on earth would you want to do this?
 
Re: HELP! - Joining large tables together

To me the obvious question is why would you want to do it anyway!

This was my last test of the issue after two days of work.

A job which should have taken half a day took me two days!

I am writing my own routine for importing comma-separated value files. The first stage creates a table using the headers out of the comma-separated value file. Using these headers for field names, I create a table with one AutoNumber field and the rest Memo fields. I reckon this is the best choice for importing text from a comma-separated value file.
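The gist of that first stage is something like this. It's only a sketch: the table name is made up, and the header string would really come from the first line of the file rather than being hard-coded:

Code:
' Minimal sketch of the first stage (illustrative names; no error handling).
' Builds a table with an AutoNumber key and one Memo (LONGTEXT) field per CSV header.
Sub CreateMemoTableFromHeaders()
    Dim db As DAO.Database
    Dim varHeader As Variant
    Dim strHeaders As String
    Dim strSQL As String
    
    ' In the real routine this comes from the first line of the CSV file
    strHeaders = "Account,Date,Description,Amount"
    
    strSQL = "CREATE TABLE tblImport (ID AUTOINCREMENT CONSTRAINT pkID PRIMARY KEY"
    For Each varHeader In Split(strHeaders, ",")
        strSQL = strSQL & ", [" & Trim(varHeader) & "] LONGTEXT"
    Next varHeader
    strSQL = strSQL & ");"
    
    Set db = CurrentDb
    db.Execute strSQL, dbFailOnError
End Sub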

Next I copy the data from the comma-separated value file into the newly created table. Then I run a routine to ascertain what type of data is in each column of the newly created MS Access table. Next I run another routine to change the fields to the correct type, the type that most closely matches the data in each field.

This process worked fine on comma-separated value files with 60 or 70 columns. However, once you get above 90 columns, everything still works fine up until you try to alter the type of a field.
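The type change itself is just DDL along these lines (the table and field names are made up for illustration); it's this step that throws the error once there are a lot of columns:

Code:
' Sketch of the type-change step (illustrative names only). Converting a Memo
' column down to a sized text column is where the error shows up.
CurrentDb.Execute "ALTER TABLE tblImport ALTER COLUMN [Amount] TEXT(50);", dbFailOnError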

This was the first time I saw the error... Error 3047. There's an article here about it:

You receive a "Too Many Fields Defined" error message when you save a table in Access 2002

It's not quite the same error message; there are about three error messages which all seem to relate to the same problem. (I may be wrong.)

From the error messages:

-2747217887: Record is too large
Error 3190: Too many fields defined
Error 3047: Record is too large

I thought the problem was trying to change the field types. To get around this perceived problem, I modified my code so that, instead of creating the table with Memo fields, it created the table with the correct fields straight away, but this failed as well! When I tried to fill the table with the data, the same error cropped up again.

I realised that one solution would be to import into multiple tables with around 45 columns each. By this method I can import any amount of comma-separated value data as long as I split it into several tables. Then it occurred to me I would need to join those tables together to recreate the comma-separated value file. So I did an experiment and, much to my surprise, the error cropped up again! I'm thinking Bob Fitz's idea is the best way forward...
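For completeness, the query version that does work looks something like this, assuming each split table keeps the same AutoNumber ID from the import (the table names are made up):

Code:
SELECT A.*, B.*, C.*
FROM (tblImport_1 AS A
INNER JOIN tblImport_2 AS B ON A.ID = B.ID)
INNER JOIN tblImport_3 AS C ON A.ID = C.ID;

Turn that into a make-table query (SELECT ... INTO) and the "Record is too large" error comes straight back, which is exactly the puzzle.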
 
Re: HELP! - Joining large tables together

Uncle G,
I just noticed you had updated the thread after I sent my original message (re: what method did you use).
I found some similar threads about error 3047 via Google. Seems to be some consensus that it comes down to the number of bytes in a record?

Here is one --you've probably seen it.
https://stackoverflow.com/questions/11190256/record-is-too-large-ms-access-runtime-error

I'd also like to know if there is a practical need/application for this other than academic interest.
 
Re: HELP! - Joining large tables together

I'd also like to know if there is a practical need/application for this other than academic interest.

I wrote my own MS Access program for managing my bank statements.

I import from several different layouts of comma-separated value files, and I wanted to automate this process. That's how it started! I got this working lovely, and then I realised that with a slight modification to my existing code it might be possible to import comma-separated value files with over 255 columns.

I created a nice big sample comma-separated value file with a spreadsheet, which is where that sample data came from. I thought that if I could solve the "more than 255 columns" problem I would have something that might be of use to other people.

What started out as half a day's work (and having a bit of fun) turned into two days of head scratching! I haven't got to my goal; however, I did develop some useful routines along the way. I've got one that creates a table from information stored in another table. This information table can be generated with a routine that extracts information from a comma-separated value file. So all in all I reckon I wasted about a day.

If I could suss out the issue and learn something from it, then I don't think that day would be wasted. But now it's time to walk away from it and maybe come back to it in a month or two, or after I read something somewhere on the web that gives me that "AH! That's the problem" moment...
 
Re: HELP! - Joining large tables together

Seems to be some consensus that it comes down to the number of bytes in a record?

I arrived at the problem from a slightly different angle. I had created a table with Memo fields, then filled it with the data from the comma-separated value file.

The error didn't appear until I started changing the field types, a bit like in the Microsoft article I pointed to in an earlier post. The tables are fine with Memo fields; they are not fine with other types of fields. But the error message is confusing, because you've got this table full of data, a happy, stable, working table.

Then the error message says that you've got too much data, but only after you've changed a field from a Memo field to a text field. You would expect a text field, with its limited number of characters, to be a better situation than having a Memo field... so why would it throw an error?

It appears that whatever is checking how much data you have in the table ignores the Memo fields. When you change from a Memo field to a text field, that triggers the error. But I don't understand why.

Why have the error in the first place? You've got this table that works fine with lots of data and lots of columns in Memo format; what difference does it make if you change a field from Memo to text? Why allow Memo fields and not allow text fields?
 
This is a hazy issue in my old memory because it has been a long time since I read the article that explained this.

The reason Memo fields are so special is that in essence, Access creates a POINTER somewhat like a bookmark (but NOT REALLY a bookmark) in the Memo field pointing to the extended data of the Long Text field. It is analogous to having a Memo Object data type in a record. A Short Text field's size is evaluated based on the maximum size you said it could be and the test for "Too many characters" is based on your declaration of Short Text length. This is done because a record cannot exceed 2048 bytes.

However, a single Memo field can easily get up to 65535 bytes (64K - 1). When you convert some of the Short Text fields to Long Text fields, you actually SHORTEN the record slightly, because the "object pointer" (probably like a Windows scalar string descriptor) is only 8 bytes, 4 of which are an address, 2 of which are size, and 2 of which are status info. By making a Short Text field become a Memo field, you allow Access to SPLIT the record into memo and non-memo parts, thus bypassing the 2048-byte limit on individual records, because that limit only applies to the non-memo parts.

That maybe goes a long way towards explaining why it works when you split the field into memo parts. This ALSO explains why you can't index the contents of a Memo field. The memo isn't part of the main record.
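A rough bit of arithmetic makes the point. The figures below are purely illustrative, using the 2048-byte record limit and 8-byte pointer mentioned above and ignoring Unicode storage and per-field overhead; the column count and declared sizes are made up:

Code:
' Purely illustrative arithmetic based on the figures quoted above.
Sub RoughRecordSizeIllustration()
    Const lngLimit As Long = 2048
    ' 90 Short Text fields declared at 50 characters each count against the record:
    Debug.Print "As sized text fields:"; 90 * 50; "bytes (over"; lngLimit; ")"
    ' The same 90 columns held as Memo pointers only cost 8 bytes each in the record:
    Debug.Print "As Memo pointers:    "; 90 * 8; "bytes (under"; lngLimit; ")"
End Sub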

"Too many fields" and "Record too long" are both symptoms of something that has to be rather suspicious. If it were anyone but you, Uncle G, I would say to check carefully about the implied table's normalization. But you know about that so I'm going to assume that a rigorous analysis says you need to do it this way.
 
That makes a lot of sense, Doc Man.

I noticed somewhere on the interweb that OLE fields were also not counted; again, another object.

With regard to breaking design rules (and I realise you're being "tongue-in-cheek" with your comment), my idea was to import the comma-separated value text into Access and then write some routines to separate it out automatically, and when the automation process failed, to ask the user questions.

Going back to the bank statement (where I originally started), the imported data would be examined for obvious field names, "Account", "ACC", basically the data you'd need in your bank statement table. It's working fine for my small bank-statement-type comma-separated value information.

I'm not going to pursue the idea of importing large comma-separated value stuff anymore; well, I might have a look at it in the future, when I have got over my indignation at being thwarted by Microsoft once again...
 
If I may be so bold, have you considered READING (as a file) with each long record as a string, then using the SPLIT function on the comma as separator, and parsing it that way?

Concur with your interpretation of OLE fields being separate from the record. Again, a case of a pointer of some internal type to the actual object, which I believe has an internal data type of "BLOB" (Binary Large Object).
 
have you considered READING (as a file) with each long record as a string, then using the SPLIT function on the comma as separator, and parsing it that way

That sounds exactly like what I am doing unless I am missing something?

Your question encouraged me to think about the situation from a different angle. Thanks to the Split function, I have the individual field names and the individual field contents. Having this information dictates the next step, which is to add the data to each field of each row individually, one at a time.
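To put that in concrete terms, what I'm doing at the moment is essentially this. It's a minimal sketch with made-up table and parameter names, writing each value into its field one at a time through a DAO recordset:

Code:
' Minimal sketch of the current field-by-field approach (illustrative names,
' no error handling). varFieldNames and varValues come from Split.
Sub AddRowFieldByField(varFieldNames As Variant, varValues As Variant)
    Dim rs As DAO.Recordset
    Dim i As Integer
    
    Set rs = CurrentDb.OpenRecordset("tblImport", dbOpenDynaset)
    rs.AddNew
    For i = LBound(varFieldNames) To UBound(varFieldNames)
        rs.Fields(varFieldNames(i)).Value = varValues(i)   ' one field at a time
    Next i
    rs.Update
    rs.Close
End Sub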

An alternative method would be to use a single INSERT INTO SQL statement. I'm guessing this would blast all of the data straight into the row in one single operation and might avoid triggering the error message.

I'll give that a go and report back.

 
That sounds exactly like what I am doing unless I am missing something?

More likely, I just didn't read that from your description. But from what I did read, the question still comes down to this: You want to get data from a string to a table but there are essentially constant "markers" (the field names) in the string that don't need to be stored. Instead they tell you what data you have following (?) the marker.

This is just my upside down and arse-backwards logic rising to the surface, but the parsing step probably ought to precede the storing step, since storing the keyword markers is kind of redundant. You have been around long enough to know how to use the various VBA methods on a TableDef to add a field, and how to dynamically build DDL strings to add a field as well.

If I were to do this, I would do the following (there's a rough consolidated sketch after the list):

1. Read the long string from a file, keeping it in a string variable.

2. Split the string based on its delimiters (you said "comma", that's as good as any other)

3. Step through the split array to see what field names are present. Where a field is one not seen before, add it to the table. (VBA or DDL, your call.) Always add the field name to a string that will be part of the next INSERT statement.

Code:
stUPD = "INSERT INTO MyTable (field1, field2, etc."  'NO PARENTHESIS AT THE END!
stUPD = stUPD & ", " & NewFieldName

4. Add the data to a string that will be used as the VALUES list for the same INSERT statement.

Code:
stUPDV = "VALUES ('value1','value2','value3'"    'AGAIN, NO PARENTHESIS
...
stUPDV = stUPDV & ",'" & NewValueString & "'"

5. At the end of the input string, add the closing parentheses to the INSERT string and concatenate the two parts. Execute the result.

6. Lather, Rinse, Repeat until clean.
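Pulling steps 1 to 5 together in one place, a rough sketch might look like the following. The file and table names are made up, the first line of the CSV is assumed to hold the field names, and adding previously unseen fields to the table (step 3) is left out for brevity:

Code:
' Rough consolidation of steps 1-5 above (illustrative names; no handling of
' quoted fields or embedded commas, and no error handling).
Sub ImportCsvBySql()
    Dim intFile As Integer
    Dim strLine As String
    Dim varFields As Variant, varValues As Variant
    Dim strFieldList As String, strValueList As String
    Dim i As Integer
    
    intFile = FreeFile
    Open "C:\Temp\Sample.csv" For Input As #intFile     ' step 1: read the file
    
    Line Input #intFile, strLine
    varFields = Split(strLine, ",")                     ' steps 2/3: field names from the header line
    strFieldList = "[" & Join(varFields, "], [") & "]"
    
    Do While Not EOF(intFile)
        Line Input #intFile, strLine
        varValues = Split(strLine, ",")                 ' step 2: split the data row
        
        strValueList = ""
        For i = LBound(varValues) To UBound(varValues)  ' step 4: build the VALUES list
            If i > LBound(varValues) Then strValueList = strValueList & ","
            strValueList = strValueList & "'" & Replace(varValues(i), "'", "''") & "'"
        Next i
        
        ' step 5: close the parentheses and execute the combined statement
        CurrentDb.Execute "INSERT INTO MyTable (" & strFieldList & ") " & _
                          "VALUES (" & strValueList & ");", dbFailOnError
    Loop
    
    Close #intFile
End Sub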

The only things that could go wrong would be (a) if you have too many fields, 255 is still the limit, or (b) if the resulting record would exceed 2048 bytes of data.

However, there are sneaky ways around that if you have at least SOME clue as to the nature of your input. If you know that you have at least three categories of fields, you can break the table up into three one-to-one tables, knowing that you can never write a query that holds everything at once, since the 255-fields and 2048-bytes rules apply to query recordsets, too.

We had a similar problem of data storage with our U.S. Naval Reserve database on personnel. We used several 1/1 cases for "Additional Info" - some related to officers, some related to enlisted, some related to "Ready Reserve" status, some related to mobilization eligibility status, etc., and we just used focused queries to join the base personnel info with the parts that applied to that person for the particular process. In each case, it was a legit 1/1 and was done to avoid certain limitations in the API we were using.
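For what it's worth, the sort of focused query I mean is nothing fancy. Assuming the 1/1 tables share a key (the names below are made up for illustration), it's just:

Code:
SELECT P.*, O.*
FROM tblPersonnelBase AS P
INNER JOIN tblOfficerInfo AS O ON P.PersonID = O.PersonID;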

An alternative method would be to use a single INSERT INTO SQL statement. I'm guessing this would blast all of the data straight into the row in one single operation and might avoid triggering the error message.

Probably not, unless you are removing a huge amount of data when you filter out the field names. What is your expectation for the actual number of fields and actual amount of "real" data after parsing?
 
Code:
stUPD = "INSERT INTO MyTable (field1, field2, etc." 'NO PARENTHESIS AT THE END!
stUPD = stUPD & ", " & NewFieldName

I hope the choice of variable name wasn't a subtle dig at the whole point of this exercise! :)
 
stUPD is just a variable string for updating.

But I see your point.
 
I hope the choice of variable name wasn't a subtle dig at the whole point of this exercise! :)

I'm not 100% sure if it's a stupid exercise or not yet! It started out as a bit of fun but the fun has definitely gone out of it.

I am finding out some interesting and useful things. For instance, did you know you can read a comma-separated value file with ADO? "Much ADO About Text Files" https://msdn.microsoft.com/en-us/library/ms974559.aspx I didn't, until I happened upon some forum discussions and then found that Microsoft webpage, which is over 10 years old by the way.

I couldn't get it working at first. The first mistake I made was putting the whole path in, including the file name. I corrected that, but it still wouldn't work. I did a bit of research and found that it was not happy with two dots in the file name.

I didn't have two dots in the file name, but I did have spaces and a hyphen. I removed those and it worked! In other words, it appears to be old technology, so you need to treat it like old technology, I reckon. Anyway, it might be a good tip for any other old stuff you happen upon...
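For anyone who wants to try it, the gist of the technique is to point an ADO connection at the folder and treat the CSV file itself as a table. Something like this; the folder and file names are made up, and it uses late binding so no ADO reference is needed:

Code:
' Reading a CSV with ADO, along the lines of the "Much ADO About Text Files" article.
' Note the folder is the "database" and the file is the "table" - the file name
' does NOT go in the connection string. Names below are illustrative only.
Sub ReadCsvWithADO()
    Dim cn As Object, rs As Object
    
    Set cn = CreateObject("ADODB.Connection")
    cn.Open "Provider=Microsoft.Jet.OLEDB.4.0;" & _
            "Data Source=C:\Temp\;" & _
            "Extended Properties=""text;HDR=Yes;FMT=Delimited"""
    
    Set rs = CreateObject("ADODB.Recordset")
    rs.Open "SELECT * FROM [Sample.csv]", cn, 3, 1      ' adOpenStatic, adLockReadOnly
    
    Do While Not rs.EOF
        Debug.Print rs.Fields(0).Value
        rs.MoveNext
    Loop
    
    rs.Close
    cn.Close
End Sub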
 
