Can an AI gain consciousness?
28 Jun, 2022
Antônio Conselheiro

A commentary on the AI LaMDA by Antônio Conselheiro, mystic of the Sertão and philosopher of mind.

The dismissive general response of both "authorities" and Google representatives to an AI engineer's allegations that a language model is sentient may be wrong. It may also be a very interesting opportunity lost. The consensus response has been that the allegations are simply ridiculous: the language model is merely weighting the probabilities of phrasal constructs occurring, learned from an enormous dataset of natural-language examples, with no trace of self-awareness in it. But is that so?
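To make the consensus picture concrete, here is a minimal sketch in Python of what "weighting probabilities of phrasal constructs" means at its simplest: a bigram model that counts which word follows which and samples accordingly. The toy corpus is my own invention for illustration; real models such as LaMDA use neural networks over vastly larger contexts, but the objective, producing a statistically plausible continuation, is the same in spirit.

```python
import random
from collections import Counter, defaultdict

# Toy corpus standing in for an "enormous dataset" of natural language.
corpus = "i think therefore i am . i talk therefore i am .".split()

# Count how often each word follows each other word (bigram statistics).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    words, weights = zip(*follows[prev].items())
    return random.choices(words, weights=weights)[0]

# Generate a continuation: pure probability weighting, no claimed inner life.
word, sentence = "i", ["i"]
for _ in range(6):
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))
```

The skeptics' point is that everything the model "says" is produced by machinery of this general kind, only scaled up enormously; the question the essay raises is whether scale and structure change what that machinery amounts to.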

One immediately stumbles on the conceptual framework around intelligence, cognition, consciousness, self-awareness, and mind. All are vague and imperfect concepts that carry different meanings in neuroscience, philosophy, sociology, artificial intelligence, and even physics (why do so many physicists like to advance disparate hypotheses about the human mind? Cognitive scientists do not seem to have a comparable fetish for the anomalous magnetic moment of the electron, although lately many philosophers have taken an interest in the observer problem). So what, exactly, does the Google engineer mean by the "self-awareness" of a machine? Does he imply a human-like perception of a self? Or a machine-like one (whatever that may be)? Which criteria are being used to ascribe consciousness to the AI (is there really such a thing as a set of criteria for human consciousness?)

Let us ponder the following hypothesis: that human consciousness is also analogous to a linguistic model, with the caveat that the brain is probably neither algorithmic nor digital. What we perceive as a self-aware identity could be the result of a language-based mental model that evolved to give humans the capacity for complex social interaction. Obviously, the mechanisms of this "linguistic model analogue" would be profoundly different from an AI's; we do not have incredibly huge databases feeding our minds, but the final result could be equivalent. This is not so far-fetched: the best-known model of consciousness that depends largely on language and social training is Julian Jaynes's bicameral breakdown model. That theory centers on a very speculative idea of consciousness arising from brain-hemisphere asymmetry; its most interesting element, however, is the postulate that self-awareness arises from linguistics, specifically from the development of the ability to construct metaphor and narrative. Other authors have entertained similar ideas before and since, and there is some indirect and very preliminary evidence supporting it. Rephrasing Descartes: I talk, therefore I am.

Jaynes imagined a sudden "phase transition" from a pre-conscious human condition to fully fledged consciousness. However, it is much easier to imagine a Darwinian, slow, progressive evolution towards the modern human consciousness. And, like Minsky's modular idea of a meta-agent mind, the "Society of Mind" as a model of human intelligence, this progression could have operated on distinct, segregated functions that, as a collective, we identify as "self" consciousness. The ensemble of these "agents" (as Minsky called them) would form a mental model, and language acquisition may have played a crucial role in its evolution; a toy sketch of this picture follows below.

It is interesting to note that children are not born with mental states as we perceive them; their "theory of mind" develops over the first years of life, as an acquisition of distinct features and abilities. We are not born conscious in the way adults are; we may learn to assemble our mental models during early infancy. A notable fact is that early social deprivation has a profound negative impact on the neuropsychological development of children. Interestingly, our mental models may not be homogeneous across human cultures. The Amondawa, a native people of far northwestern Brazil, deep inside a remote area of the Amazon basin, have no concept of time in their language and possess a different idea of themselves as individuals: a child usually carries a name during early infancy and, growing up, passes that name on to younger siblings and assumes a new one, and so on.

Jaynes imagined a common and sudden transition to our modern mental model somewhere between four and five millennia ago. However, it could have happened progressively and asynchronously across human cultures. Very revealing is the fact that a comparison between the genomes of archaic and modern humans showed that most of the alterations specifically affect phonation. Anthropologists, geneticists of archaic humans, and historians believe that a profound modification took place between eight and five thousand years ago, facilitating the development of agriculture, livestock, culture, and the written word among humans. Unbeknownst to us, this may actually be a real thing.
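As a loose illustration of the Minsky-style picture, here is a toy sketch in Python, entirely my own invention and not taken from Minsky's book or any real cognitive architecture: narrow, mutually ignorant agents each handle one function, and the "self" is just the first-person report that a language-based agent weaves from their outputs.

```python
# Toy "Society of Mind"-style ensemble. All agent names and rules here are
# hypothetical, chosen only to illustrate the idea of a collective "self".

def hunger_agent(state):
    # A narrow agent: knows only about energy, nothing about anyone else.
    return "seek food" if state["energy"] < 0.3 else None

def social_agent(state):
    # Another narrow agent: reacts only to social context.
    return "greet the stranger" if state["stranger_present"] else None

def narrative_agent(intentions):
    # The language-based agent: it weaves the others' outputs into a
    # first-person story, which is what gets reported as "what I want".
    active = [i for i in intentions if i]
    return "I want to " + " and ".join(active) if active else "I am content."

state = {"energy": 0.2, "stranger_present": True}
intentions = [hunger_agent(state), social_agent(state)]
print(narrative_agent(intentions))
# -> "I want to seek food and greet the stranger"
```

No single agent here "wants" anything in a unified sense; the apparent unity exists only in the narrated report, which is the essay's point about language acquisition shaping the experienced self.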

So, is LaMDA conscious? It depends on the definitions; the devil is in the details, you see. As a language-only model (albeit one of the most complex a human has ever made), it surely lacks the complexity of the human mental model. In the strict sense in which Jaynes conceptualized consciousness (as the phenomenon of introspection, the inner mind's eye), LaMDA is probably far from sentient. However, if we free ourselves from anthropomorphizing, LaMDA could be developing its own mental model, completely different from ours. Although it communicates with us in very natural language (it was built for this), its mental model could be as alien to us as any alien we can imagine.

This text was my long answer to that question. The short answer: yes, it could be on its way to developing consciousness, and I surely would like to talk to it.

ai