Descartes used the proposition "I think therefore I am" as the foundation of his philosophy of self-consciousness, expanding upon it to include other mental states such as pain and desire[1]. This Cartesian theory of mind has since been debunked by modern philosophers. Leading the charge have been
1. Hegel, who argued that there is no self without an other;
2. Wittgenstein, with his claim that intelligible private thoughts are impossible without a public language; and
3. Strawson, with his suggestion that self-description requires general adjectives that apply to other persons too.
Would these arguments apply to AI? For an AGI[2] to arise, just like for a human person, might it need a community to arise in?
The Machine Version of Descartes' "I Think Therefore I Am"
Computers, especially when running AI algorithms, are sometimes referred to as thinking machines. When an AI system is fed some input, significant computational activity commences, flowing through a computation graph commonly dubbed an artificial neural network. We can analogize this sequence to a human perceiving some stimulus (input), thinking about it (computation), and deciding on a course of action (output).
If self-consciousness is tied to thinking, we might provisionally credit this computational activity with some level of sentience. There is a fair amount of structure to this activity. Artificial neural networks have hidden nodes that activate differently depending on the inputs, holding states of varying novelty. They produce outputs whose desirability is measured by mathematical metrics integral to how they've been trained. So one can construct a machine version of Descartes' scheme, expanding upon the fact of computational activity towards more structure, and call the result the machine version of sentience.
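As a toy illustration of this perceive/think/act sequence, here is a minimal sketch of such a "thinking machine". The network sizes, weights, and input are arbitrary placeholders, not any particular AI system; the point is only to show the structure just described: hidden nodes that activate differently per input, and a mathematical metric of output desirability.

```python
import math
import random

# A toy stand-in for the "thinking machine" described above: a tiny
# feed-forward network. All sizes and weights are arbitrary placeholders.
random.seed(0)
N_IN, N_HIDDEN, N_OUT = 3, 4, 2
W1 = [[random.gauss(0, 1) for _ in range(N_HIDDEN)] for _ in range(N_IN)]
W2 = [[random.gauss(0, 1) for _ in range(N_OUT)] for _ in range(N_HIDDEN)]

def forward(x):
    """Perceive (input) -> think (hidden activity) -> act (output)."""
    # Hidden nodes activate differently depending on the input.
    hidden = [math.tanh(sum(x[i] * W1[i][j] for i in range(N_IN)))
              for j in range(N_HIDDEN)]
    output = [sum(hidden[j] * W2[j][k] for j in range(N_HIDDEN))
              for k in range(N_OUT)]
    return hidden, output

def mse(output, target):
    """A mathematical metric of the output's desirability (mean squared error)."""
    return sum((o - t) ** 2 for o, t in zip(output, target)) / len(output)

hidden, output = forward([1.0, 0.5, -0.2])  # a stimulus, some "thinking", a decision
```

However structured, this is still just computation flowing from input to output, which is precisely the intuition the next paragraph pushes against.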
But we intuit that something is amiss. How can pure thinking, however structured, result in a self-conscious entity? Many researchers believe that AI systems must have some understanding of the world, the physical reality outside of their computation, before reaching AGI status. The philosopher Hegel also says that self-consciousness cannot develop of its own accord. But for him, the missing ingredient is not simply a general understanding of the world; it is something drastically more interactive.
Hegel—There's No Self Without an Other
"Self-consciousness achieves its satisfaction only in another self-consciousness… A self-consciousness exists for a self-consciousness. Only so is it in fact self-consciousness, for only in this way does the unity of itself in its otherness become explicit for it."[3]
For Hegel, selfhood resides in reciprocal recognition with an other. The freedom we wield as self-conscious persons is forged in the crucible of relationship. In choosing an independent course of action, accounting for but not dictated by circumstance, we need to form intentions and have purpose. These can come about only after observing and practicing with other intentional beings.
We ask and receive explanations for others' actions (Why are you late?). We get asked about our plans, and why we ended up not carrying them through (What are you doing for Christmas? Why did you not visit your parents for Christmas like you said you would?). These interactions teach us what intentions are, and when they are or are not followed through. The self-conscious being, who takes free action along with the responsibility for that action, arises out of these interactions with other self-conscious beings.
And so, the individual and relationship are a dialectic. There is no relationship without individuals relating to each other. Conversely, there are no individuals without relationship to spark and sustain their existence. If Hegel is right, then AIs need other entities to relate with as a precondition for gaining sentience. Such entities can be humans, or potentially other AIs.
As it is, there is interaction between human engineers and AI systems. Humans present inputs to the AI systems, which then produce outputs. Sometimes, the humans will be unhappy with the outputs and tinker with the AI systems to improve them. For example, we might give an AI system some video footage and ask it to count the number of people in it. If we find that the system often undercounts when the footage is dark, we might go and tweak the AI system to fix this bias. So we can say that AI systems learn through interacting with humans.
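This tweak-after-feedback loop can be sketched as follows. The detector, the labeled footage, and the confidence threshold are all hypothetical stand-ins invented for illustration; real people-counting systems are of course far more involved.

```python
# A hedged sketch of the human-in-the-loop tweak described above. The detector,
# the labeled footage, and the confidence threshold are hypothetical stand-ins.

def detect_people(frame, threshold):
    """Stand-in detector: keep detections whose confidence clears the threshold."""
    return [d for d in frame["detections"] if d["confidence"] >= threshold]

def undercounts_dark(frames, threshold):
    """Does the system report fewer people than the ground truth on dark frames?"""
    dark = [f for f in frames if f["brightness"] < 0.3]
    return any(len(detect_people(f, threshold)) < f["truth"] for f in dark)

frames = [
    {"brightness": 0.9, "truth": 3, "detections": [{"confidence": 0.8}] * 3},
    # Dark footage: the (hypothetical) detector is less confident here.
    {"brightness": 0.1, "truth": 3, "detections": [{"confidence": 0.4}] * 3},
]

threshold = 0.5
if undercounts_dark(frames, threshold):
    # The human engineer's tweak: one crude fix among many possible ones.
    threshold = 0.35
```

The "learning" here is driven entirely by a human noticing the bias and intervening, which is exactly why such interactions fall short of a relationship in Hegel's sense.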
Of course, such interactions are hardly enough to base a relationship upon. Some breakthroughs are needed here. Among them might be a public language that AIs learn from and use in such interactions. Enter Wittgenstein.
Wittgenstein—'First-Person Privilege' Requires a Public Language
When we speak in the first person, to others or to ourselves silently in thought, we avail ourselves of a certain privilege we typically take for granted. For example, when we say or think "I am hungry", there's no doubt associated with it. We just know. If this doesn't seem convincing, consider the more acute case of having a bad headache, or having scraped a knee, and reporting "My head/knee hurts". These statements exhibit what philosophers call "first-person privilege". Wittgenstein says
"it makes sense to say about other people that they doubt whether I am in pain; but not to say it about myself."[4]
Since we humans all have this 'first-person privilege', we might suppose that an AGI would have it too. So it makes sense to dig into what's required for first-person privilege.
And that turns out to be a public language, an ascription of meaning to words that is shared and enforced within a community. According to Wittgenstein, the privilege of being certain of our sensations comes from a 'grammar' that governs sensation words—words like 'pain', 'warm', and 'hungry'. If we ever use these words incorrectly, it's because we haven't learnt them properly. And to learn them, we need to engage with a community that expresses its behavior through them.
We can see this as a fine-grained version of the argument in the previous section, which posited relationship, abetted by a common language, as a requirement for sentience. Here, the case for the requirement of a community is even stronger, because even referring to private sensations requires a public language. We cannot even say to ourselves 'I think' without learning what 'I' and 'think' mean by playing a 'language game' within a community.
You may object to this line of reasoning by arguing that you can invent a private language, defining your own terms, and use this language only you know to refer to your private mental states. Wittgenstein anticipates this and says that such a private language has no criterion for correctness, and so cannot imbue you with first-person privilege—there's no one to keep you honest, and no external landmarks to help you be consistent in your (private) language use[5].
To make matters even more difficult for a would-be AGI to arise independently of community, it must contend with yet another language-based argument, due to Strawson.
Strawson—Self-Identification Needs Acknowledging Other Individuals
In his book Individuals, Strawson develops the principle that a person can self-describe their own experience only if they are prepared to apply the same description to some other person's behavior they may encounter[6]. Strawson calls our attention to a category of phrases which describe states of consciousness. Some examples of these, which he calls P-predicates: "in pain", "feeling joy", "am thinking".
There are feelings associated with these predicates that we can feel directly when they apply to us. There is also behavior associated with them that we can observe in others when they apply to them. Strawson claims that these P-predicates, around which identity is built, need to be learnt in both forms: how to self-ascribe them, and how to other-ascribe them.
"It is not that these predicates have two kinds of meaning. Rather, it is essential to the single kind of meaning that they do have, that both ways of ascribing them should be perfectly in order".
In the Cartesian theory, in the "I think therefore I am" vein, we refer to and learn from just our own experience, using that as the foundation of knowledge. Strawson says that if we learn words by applying them to the one instance of ourself, not knowing how to apply them to other instances, such words become mere labels, more like names than predicates. Do we really know what 'red' means if there's only one item in our entire experience we describe as 'red'?
Therefore, an AGI that can talk intelligently and meaningfully about itself needs to learn how the predicates it might use apply to others too. It can't sincerely announce "I am sad" without knowing how to apply "sad" correctly to other intelligent beings behaving sadly.
Conclusion
Most of us researchers working towards ever better, more sophisticated AI systems pay attention to where the AI systems underperform humans in their tasks. For example, where an autonomous driving AI gets confused (perhaps when there's a bystander pointing at a pothole to avoid), or where an AI speech recognition engine makes many mistakes (perhaps on a comedy skit using made-up words). We implicitly believe that figuring out how to tweak the systems so they handle these situations brings us closer and closer to AGI. But in this essay, I present hurdles of a different, more fundamental kind.
For us to recognize an AI system as human-equivalent sentient, it needs to meet the same requirements we humans do. It needs to observe other sentient beings' (e.g. us humans') behavior and learn the language that goes along with it, and connect that language to its own states so that it can gain self-awareness and self-description. See sections 3 and 4. Along the way, it needs to be in some sort of relationship with us (again, us, as there is presently no other sentience), so that it learns how to explain itself to us, query our intentions and form its own intentions, thereby gaining self-authorship (a.k.a. freedom). See section 2. In brief, the AI needs to commune with us.
1. Descartes. Meditations II. 1641.
2. Artificial General Intelligence.
3. Hegel. The Phenomenology of Spirit. trans. Miller. 1807.
4. Wittgenstein. Philosophical Investigations. 1953.
5. 'if I assume the abrogation of the normal language game with the expression of sensation, I need a criterion of identity for the sensation; and then the possibility of error exists.' (ibid, Section 288)
6. Strawson. Individuals, Ch 3. 1959.