In an age marked by the rise of generative AI and a cascade of disruptive technologies, one could be forgiven for believing that our essence, the core of human nature, might be overshadowed or even rendered obsolete. Machines that compose, design, and even ‘think’ challenge our age-old conceptions of creativity, intellect, and identity. Yet it is precisely in these technologically saturated times that studying human nature assumes unparalleled significance.
As the boundaries blur between human and machine, understanding what inherently makes us human becomes our compass, our grounding force. It’s not merely about differentiating ourselves from our digital counterparts, but about establishing a deeper understanding of our desires, ethics, and motivations in the context of this rapidly changing landscape.
Generative AI, with its seemingly infinite potential, acts as a mirror, reflecting back not just what it can generate but what we feed into it, what we prioritize, and what we value. It is a mirror of our biases, our aspirations, and our fears. As technology becomes an extension of our own cognition, the need to introspect and understand our nature becomes critical. Do we wish to amplify our current state, with all its flaws and brilliance? Or do we aspire to evolve, to better ourselves, informed by a deeper understanding of our essence?
In this dance of code and consciousness, of algorithms and aspirations, anchoring ourselves in the study of human nature is not just important, it’s imperative. It ensures that as we shape the future, we do so with wisdom, empathy, and a profound respect for the sanctity of the human experience.
Hi @dr_defalco
I agree that the advent of generative AI provides a prime opportunity to think more about human nature. I have been loosely following the discussions about whether such AIs are conscious, can think, or can understand language. Personally, I think the answer is no to each of these; however, I am not nearly as confident in those conclusions as many seem to be. My hesitancy in each case stems from our lack of knowledge of how humans are conscious, think, and understand language.
For consciousness, this goes back to what David Chalmers has called the “hard problem”: explaining how consciousness arises from physical states. If we do not know how our own brain states yield consciousness, I do not believe we can rule out consciousness in other entities, even those whose physical structure is quite different from ours. After all, many wildly different physical processes yield heat, so why should consciousness be restricted to a single one?
For language, I think the Stochastic Parrots group has done an excellent job of arguing that generative AI proceeds largely by making statistical projections from bits of language. However, I don’t think they have been as successful in arguing that we do not. Certainly we do not experience language that way, but neither do we experience our thoughts as electrical patterns in a complicated organ made mostly of fat, nor our emotions as a rush of hormones acting on our autonomic nervous system. I know of nothing that rules out our experience of understanding language being a conscious experience built on stochastic projections of syntax. Even if that could be ruled out, a further argument would be needed that understanding language requires a particular kind of conscious experience. I think Searle tried to make that argument with his Chinese Room thought experiment, but I don’t think he was wholly successful.
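To make “statistical projections from bits of language” concrete, here is a minimal sketch in Python: a toy bigram model of my own invention (not anything from the Stochastic Parrots paper, and far simpler than the neural networks behind modern generative AI). The corpus and function names are purely illustrative. The model understands nothing, yet it produces text by sampling each next word from the statistics of the text it has seen.

```python
import random
from collections import defaultdict, Counter

# Toy training text; a real system would use vast amounts of data.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Count how often each word follows each other word.
transitions = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current][nxt] += 1

def generate(start: str, length: int = 10) -> str:
    """Extend `start` by repeatedly sampling a statistically likely next word."""
    words = [start]
    for _ in range(length):
        counts = transitions[words[-1]]
        if not counts:
            break  # no observed continuation for this word
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
# e.g. "the dog sat on the mat . the cat chased"
```

Scaled up from word bigrams to billions of parameters over long contexts, this sampling-from-learned-statistics recipe is essentially the “stochastic parrot” picture; whether our own language faculty is something like this under the hood is, as noted above, the open question.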
@jim-hardy Searle did indeed make this argument. I co-wrote a paper on ethical paradigms that may be beneficial in the design of AI-enabled systems when speculating about the possibility of intentionality and consciousness in AI: “The Renovated 中文 Room: Ethical Implications of Intentional AI in Learning Technology”; see page 81 at ttutoring.org/attachments/download/4104/giftsym9_proceedings_FINAL_1.0.pdf. I’ll upload the doc here as well.
It’s my understanding that most AI draws from a predetermined set of sources, not from “everything that’s out there.” And even if it did draw from “everything,” it would still draw only from what is currently accessible online. Much of humanity, however, is not online in any meaningful way, and a great deal is hidden behind paywalls (particularly research papers!). It seems to me, therefore, that AI is only capable of showing us what a particular set of people have actively chosen to share online. While that may be interesting in itself, I don’t think it’s a true mirror of human nature writ large!