Thinking about the what and the why of human nature in the age of AI
I agree that the advent of generative AI provides a prime opportunity to think more about human nature. I’ve been somewhat following the discussions surrounding whether such AIs are conscious, can think, or understand language. Personally, I think the answer is no to each of these; however, I am not nearly so confident of those conclusions as many seem to be. My hesitancy in each case stems from our lack of knowledge of how humans are conscious, can think, and can understand language.
For consciousness, this goes back to what David Chalmers has called the “hard problem” – explaining how consciousness arises from physical states. If we do not know how our own brain states yield consciousness, I do not believe we can rule out consciousness in other entities, even those whose physical structure is quite different from ours. After all, many wildly different physical processes yield heat, so why should consciousness be restricted to a single one?
For language, I think the Stochastic Parrots group has done an excellent job of arguing that generative AI proceeds largely by making statistical projections from bits of language. However, I don’t think they have been as successful in arguing that we do not. Certainly, we do not experience language in that way, but neither do we experience our thoughts as electrical patterns in a complicated organ of mostly fat. Nor do we experience emotion as a rush of hormones impacting our autonomic nervous system. I don’t know of anything that rules out our experience of understanding language being a conscious experience based on stochastic projections of syntax. Even if that could be ruled out, there would have to be a further argument that understanding language requires a particular kind of conscious experience. I think Searle tried to make that argument in his Chinese Room argument, but I don’t think he was wholly successful.
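For readers unfamiliar with what “statistical projection from bits of language” means concretely, here is a minimal sketch: a bigram model that counts which word follows which in a corpus, then samples continuations in proportion to those counts. This is vastly simpler than a modern transformer (which conditions on long contexts via learned representations), but it captures the basic idea of generating text purely from observed co-occurrence statistics. The corpus, function names, and seed are my own illustrative choices.

```python
import random
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count how often each word follows each other word."""
    words = text.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def next_word(counts, word, rng):
    """Sample the next word in proportion to its observed frequency."""
    options = counts[word]
    return rng.choices(list(options.keys()),
                       weights=list(options.values()), k=1)[0]

# Toy corpus; a real model is trained on vastly more text.
corpus = "the cat sat on the mat and the cat slept on the mat"
model = train_bigrams(corpus)

rng = random.Random(0)
out = ["the"]
for _ in range(4):
    out.append(next_word(model, out[-1], rng))
print(" ".join(out))  # a plausible-looking continuation, chosen stochastically
```

Nothing in the generation loop “understands” cats or mats; it only projects forward from frequencies, which is the Stochastic Parrots group’s point. The open question the paragraph above raises is whether our own fluency can be shown to rest on something fundamentally different.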
@jim-hardy Searle did indeed make this argument. I co-wrote a paper on ethical paradigms that may be beneficial in the design of AI-enabled systems when speculating about the possibility of intentionality and consciousness in AI: “The Renovated 中文 Room: Ethical Implications of Intentional AI in Learning Technology” see page 81 at ttutoring.org/attachments/download/4104/giftsym9_proceedings_FINAL_1.0.pdf
— but I’ll upload the doc here as well.
It’s my understanding that most AI draws from a predetermined set of sources — and not from “everything that’s out there.” And, of course, even if it did draw from “everything,” it would still draw only from those things that are currently accessible online. Much of humanity, however, is not online in any meaningful way — and an awful lot is hidden behind paywalls (particularly research papers!). It seems to me, therefore, that AI is only capable of showing us what a particular set of people have actively chosen to share online. While that may be interesting in itself, I don’t think it’s a true mirror of human nature writ large!