I am reminded of a scene from The Imitation Game, in which the detective is questioning Alan Turing and diverts from the homosexuality investigation to ask about machine intelligence. The summary of the discussion is that it would be unfair to say that a machine cannot think, though it WOULD be fair to say that it cannot think like a human. (After all, it ISN'T a human.) The important question is, can you say that what the machine does is NOT thinking?
In terms of AI, like anything else, we have to define terms. For instance, there is AI... and then there is an AI. We can discuss AI methods, and ChatGPT certainly qualifies there. But an AI entity must be able to not only ANSWER questions, but must be able to ASK them spontaneously. An AI has to be smart enough to be able to ask or answer questions about self-awareness. An AI needs a sufficient degree of independent thought (i.e. responses NOT directly derived from inputs) to diverge from the question. Sort of like watching a 5-year-old playing organized outdoor sports and stopping to chase a butterfly.
Descartes' statement, "I think, therefore I am," perhaps needs modification. The long-winded version of the question might be: "Can I ask whether I exist? And if I can, does that automatically prove that I exist?" The problem with that approach is, of course, that it still casts the question in a variant of a form posed by a human. So doesn't it force a non-human into a human mold, simply by ignoring the fact that the machine isn't human? Does that very question try to force a square peg into a round hole? We need to formulate a more comprehensive way of deciding that something is intelligent. The test has to include humans - preferably in a way that includes ALL humans as being intelligent, though given today's politics, I wonder if that is entirely true.
This question is tied deeply into another recent thread of ours about whether humans have free will, and that thread included an exploration of the idea that a human's intelligence is NOT rooted in neurons, axons, dendrites, and such, but is a 2nd-order (or higher) phenomenon - a case where the whole IS greater than the sum of its parts. In other words, while we may be biologically-constructed machines, our brains are a product of something MORE than just neurons firing in a sequence. Intelligence is a result of the brain reaching and then exceeding a certain (currently unspecified) level of interconnection.
Sci-fi writers have approached this problem for decades and have seen the answer (philosophically speaking) as the product of reaching some level of complexity and interconnectivity at which cyber self-awareness becomes an issue. Two examples (out of many...):
David Gerrold wrote When H.A.R.L.I.E. Was One about a machine intelligence that could talk with people and solve problems in a way that no one else could. In it, HARLIE was self-aware - and became aware that folks were trying to cut funding for his (its) project. So he learned how to blackmail people into supporting his project. He finagled them into building an even MORE powerful AI as part of the Graphic Omniscient Device (yep - the GOD project), for which only HARLIE could possibly serve as an interface.
Randall Garrett wrote Unwise Child about a machine intelligence capable of doing research on nuclear physics. But it was self-aware and got polluted by a radical who fed "Snookums" religion, hoping to gain revenge on someone who had wronged his family. The poor little AI went nuts trying to deal with an essentially untestable medium, the spirit world, and had to be shut down.