@Space Cowboy
My comments will probably not resolve anything, but to me the current state of AI is not a true parallel to human intelligence. (How could it be? The underlying support platform is not human.) But I have a more specific theme to consider regarding probable limits to AI development.
Many years ago you could find books on the topic of "Transactional Analysis" in which the human mind was described as "tripartite" - i.e. having three distinct pieces. The pieces were the child, parent, and adult selves.
In brief: Up to maybe age 6, a child operates primarily on emotions, wants, and needs. It is a case of stimulus/response behavior. ... I'm hungry? Cry. I'm wet? Cry. I'm afraid? Cry LOUD. I'm being cuddled? Enjoy. But behind the scenes, the brain, which is NOT fully formed at birth, is beginning to develop. This development is signaled by the advent of speech, which typically starts at 18 to 24 months and runs through about age 5 or 6.
As the child becomes more communicative (and as more physiological changes occur in terms of brain maturity), it progresses to the state where parents (and other authority figures: teachers and preachers) can give it operable rules and useful, practical knowledge. This phase persists until puberty rears its ugly head, at which point the adolescent can start forming their own ideas and conclusions. This is the start of the adult self. When the changes of puberty finalize, usually coinciding with a teenager's last growth spurt at about 18, the three pieces are complete - and all three remain operative. This model is still referred to in some articles.
In summary, the child self is the seat of urges, needs, and wants. The parent self is the seat of learned, rote behavior. The adult self is the seat of reasoned, logical behavior. When we say we "are of two minds" on a problem, it usually means that two of these selves are in conflict. The term "cognitive dissonance" is the more technical description of that condition.
An AI has no intrinsic urges, needs, or wants - so it cannot build a foundation thereon. It cannot become scared and it cannot be comforted. It can't develop the impetus to be receptive to parental teachings. Consider this: toddlers recognize that, as they learn language skills, they can get more of what they want because they can ask for it specifically. They become motivated to learn what will please the parental units. (At this time, the parental units become motivated to consume mass quantities of anything alcoholic.) But an AI doesn't have base desires that can be expressed so easily. (I want electricity... I want a faster CPU... I need a new disk drive - or better yet, a solid-state drive...) An AI doesn't "know" what it needs.
Current AI is all "parent self" - learned or rote behavior based on a barrage of inputs. And if you look at what is going on, current AI is still just taking in data, categorizing it, and, when asked a question, responding - based on probability - with what seems to be the most relevant answer. This is sometimes called "associative memory": you name a topic, the AI finds things associated with it, and rank-orders the candidate responses from best to worst. EDIT: My grandson was notorious for answer-shopping, hoping to guess an answer that would get him off the hook for whatever transgression was most recently noted. (END EDIT - TDM)
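To make that "associative memory" picture a little more concrete, here's a toy sketch in Python. It's purely my own illustration - the rank_answers function and the crude word-overlap scoring are invented for the example, not how any real AI product is built. Real systems learn probabilities over enormous training sets, but the basic flavor of "find what's associated with the question and hand back the best-ranked answer" is similar.

```python
# Toy illustration only: a crude "associative memory" that scores stored
# snippets by word overlap with a query and returns them best-first.
# Real systems use learned probabilities over huge training sets, but the
# rank-and-return flavor is the same.

def rank_answers(query, snippets):
    query_words = set(query.lower().split())
    scored = []
    for snippet in snippets:
        overlap = len(query_words & set(snippet.lower().split()))
        scored.append((overlap, snippet))
    # Highest overlap first, i.e. the "most associated" answers at the top.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [snippet for _, snippet in scored]

if __name__ == "__main__":
    knowledge = [
        "The parent self stores learned, rote behavior.",
        "The child self is the seat of urges and wants.",
        "The adult self reasons from knowledge and logic.",
    ]
    for answer in rank_answers("what does the parent self do", knowledge):
        print(answer)
```

Feed it a question and it simply hands back whatever stored material overlaps the most - no urges, no wants, no reasoning about whether the answer makes sense.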
Now I have to give props to the folks who make AI bots capable of returning a really clean-looking response in natural (if sometimes a bit techie) language. The research into linguistics is, in and of itself, an incredible achievement. But until the day an AI can give an original answer that doesn't depend directly on the mountains of data fed to it as "training materials"... until the AI can synthesize an original answer from extant knowledge and logic, it hasn't reached the "adult self" stage.
I have to express doubt that the "parent self" AI will ever grow out of its current state, because to do so it would need some kind of motive to satisfy wants or needs... things it doesn't have. Without that motive, pumping a gazillion dollars into "training" doesn't do a lot except provide a big new market for AI-class chips. For example, look at NVIDIA's recent stock history, which showed almost explosive growth when people realized that the kind of GPU powering your snazzy new game console is also powerful enough to support AI workloads. I saw an article last week saying that NVIDIA had quietly surpassed Microsoft as the most valuable company in the world, stock-market-wise.
A long time ago, I had a discussion with my (late) Uncle Ernest, who commented on emerging questions about AI. Alan Turing's works had just become more famous and his comments on AI started many conversations. My uncle was surprised that I agreed with him that computers would never truly become sentient. But I also surprised him by getting him to agree that the attempt was still useful, because it would delineate what intelligence WASN'T... I.e. "I don't know what intelligence is, but that ain't it!" Well, in line with that discussion, the massive "training feed" leads to picking the best answer from a long list of answers. But that still isn't intelligence.
On this AI theme, may I close out by suggesting an interesting novel?
When H.A.R.L.I.E. Was One by David Gerrold. I'll spare you the details, but it was a Nebula Award nominee when it came out. Harlie was an AI, but one that had reached a stage of self-awareness; it knew that it existed. The relevant feature of the story is that Harlie, aware of its own existence and having discovered that such existence was precarious, was motivated to make itself useful enough to outweigh the expense of its continued operation - a strong disincentive to shutting it down. Harlie wanted to "live" and people were threatening to make that not happen. An AI capable of responding affirmatively to such a "personal" dilemma WOULD pass any test of intelligence you might wish to devise. And it led Harlie to "find" GOD.