Diogo - if you want to know how big a stored query is (not counting the query plan), open the query in SQL view. You are looking at the query. That's it. The query plan is actually very simple. We have some articles by Pat Hartman about query plans and I don't recall that they are big, bulky things either. I'd be surprised if a query and its plan together exceeded 1K bytes.
Queries, unless they are open, have NOTHING but text and settings stored with them (and if you prefer "properties" to "settings," you would not be wrong). When a query is open, there is potentially some in-memory structure, such as a list of the records selected by the WHERE clause. But as to dynamically created SQL? That works just fine, and in fact, unless you can build a parameter query for what you want to do, sometimes the only way to get the desired effect at run time is to build the query on the fly.
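As a minimal sketch of what "building the query on the fly" looks like, here is some VBA that assembles a SQL string from a value the user typed and feeds it to a list box. The table, field, and control names (tblCustomers, txtCity, lstResults) are made up for illustration:

```vba
' Build a WHERE clause at run time from whatever the user typed,
' then hand the resulting SQL to a list box. All names hypothetical.
Dim strSQL As String
Dim strCity As String

strCity = Me.txtCity.Value                      ' unbound text box on the form
strSQL = "SELECT CustomerID, CustomerName " & _
         "FROM tblCustomers " & _
         "WHERE City = """ & strCity & """ " & _
         "ORDER BY CustomerName;"

Me.lstResults.RowSource = strSQL                ' the RowSource is just a query
```

Note the doubled quotes around the string value; for anything user-facing you would want to escape embedded quotes (or use a parameterized QueryDef) to avoid breaking the SQL.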
The speed issue of stored vs. dynamic queries is open to discussion, but the truth is that Colin has done a lot of speed testing. If he says the difference is quite small for the many queries he has tested, then I would have to say it probably isn't much to worry about. I think the biggest difference shows up when a query containing parameter substitution is run via DoCmd.RunSQL, compared to a pre-constructed SQL string run via some equivalent of CurrentDb.Execute.
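To make the two calls concrete, here they are side by side, run against a hypothetical tblOrders (in practice you would pick one or the other, not both):

```vba
Dim strSQL As String
strSQL = "UPDATE tblOrders SET Shipped = True " & _
         "WHERE OrderDate < Date() - 30;"

' Interface-level: pops up "You are about to update n row(s)"
' unless you bracket it with DoCmd.SetWarnings False / True.
DoCmd.RunSQL strSQL

' Engine-level: no confirmation dialogs, and dbFailOnError turns
' silent failures into trappable run-time errors.
CurrentDb.Execute strSQL, dbFailOnError
```

The Execute route also skips the user-interface layer entirely, which is part of why it tends to benchmark faster for action queries.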
If you think about it, the topic we call "cascading combo boxes" is all about dynamic queries: they are used to reload the .RowSource property of the second and later combo boxes in the cascading sequence, because that .RowSource is just a query.
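A bare-bones sketch of that idea, with made-up control and table names (cboCountry, cboCity, tblCities): the first combo's AfterUpdate event rebuilds the second combo's RowSource.

```vba
' In the form's code module. tblCities is assumed to have a numeric
' CountryID foreign key. All names here are hypothetical.
Private Sub cboCountry_AfterUpdate()
    Dim strSQL As String
    strSQL = "SELECT CityID, CityName FROM tblCities " & _
             "WHERE CountryID = " & Me.cboCountry.Value & " " & _
             "ORDER BY CityName;"
    Me.cboCity.RowSource = strSQL   ' assigning the SQL requeries the combo
    Me.cboCity.Value = Null         ' clear any now-stale selection
End Sub
```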
You asked about file bloat. That won't be caused by having (or not having) a bunch of stored queries. It is caused not merely by having but by RUNNING a lot of action queries, particularly DELETE or UPDATE queries. Bloat occurs when you have dead space in your DB. Dead space occurs when you delete records; big dead space occurs when you delete records in bulk.
Besides the obvious DELETE queries, there is another source of deleted records. I believe that an UPDATE involving one or more string fields does not update the record in place. Instead, it creates an updated copy of the original record, threads the new record into the table, and then deletes the old one. So a bulk update deletes as many records as it updates, and that is where the bloat comes from.
Access cannot get rid of the bloat without a Compact & Repair, because garbage-collection algorithms (which would reclaim the space) have theoretical and practical limitations. I've seen the problem on IBM mainframes, DEC and HP systems, and PCs. I'm told it affects Windows, UNIX, macOS, OpenVMS, and the myriad small UNIX variants. Some of those I know well enough to agree; for the others, I believe the problem exists as well.
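For what it's worth, the Compact & Repair can be scripted. You can't compact the database your code is currently running in, but from a second database (or a scheduled task) it can be done with DAO; the file paths below are placeholders:

```vba
' Run this from OUTSIDE the target database, which must be closed
' by all users. Compact writes a fresh, bloat-free copy of the file;
' you then swap the copy into place. Paths are hypothetical.
Dim strOld As String, strNew As String
strOld = "C:\Data\MyBackEnd.accdb"
strNew = "C:\Data\MyBackEnd_Compacted.accdb"

DBEngine.CompactDatabase strOld, strNew

Kill strOld                  ' delete the bloated original
Name strNew As strOld        ' rename the compacted copy into place
```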