Martha Argerich on the Moon: Consciousness, AI and Music
Observing consciousness in AI and humans through science and meditation
Last month's issue ended with the claim that meditation can reveal to us aspects of our common human-ness, perhaps human consciousness itself. How so? Let me attempt to illustrate with another meditation story.
It was 10 to 15 minutes into a silent sitting at a meditation hall, and most of us, including me, had our eyes closed. Latecomers were still filtering in. As this was New York, the seats were tightly spaced, so the unpunctual were quite obtrusive while finding theirs, especially since the old wooden planks of the floor creaked underfoot.
While my eyes were closed, I heard every latecomer entering the hall. I observed myself mentally waiting for them to sit down so that I could get back to experiencing silence. This happened again and again, until, in the middle of one such entrance, the 'waiting' part of my experience suddenly vanished... and only sound and vibration remained, without any emotional valence. With that release, I started noticing more of my surroundings.
There were sounds of breathing and minor movements, coming from those already seated; the movement of my own chest as it expanded and shrank with my breaths; the feel of the air traveling through my throat. These didn't impinge upon my attention—they were just there as part of my awareness. The words I'm using here to explain the experience are after-the-fact; the experience itself was remarkably language-free. I silently chuckled to myself, feeling perfectly at ease.
That was one of my earliest experiences of satori. It was about 10 years ago, but I vividly remember it. There's probably no canonical way to make sense of satori. My main takeaway was its impersonality: my experience of it clearly did not depend on my personal history, education, or even my body. This gave me the conviction that satori had to be basically the same for everyone else too. I felt I'd realized an important truth, that I share something common with every conscious person.
While we might hope to understand human consciousness by means of introspection like meditation, we likely need another approach to understand AI consciousness.
AI and the Science of Consciousness
The recent 88-page paper entitled Consciousness in Artificial Intelligence: Insights from the Science of Consciousness caught my attention because it has Yoshua Bengio as one of its co-authors. Bengio is one of the three acclaimed AI researchers of his generation who, in the 1990s and 2000s, pioneered the deep learning approach to training artificial neural networks1. I saw him in action at a machine-learning conference I attended in NYC a few years back, and he impressed. So I read the paper with anticipation.
The paper mostly limits its study to what it calls "phenomenal consciousness". A system has "phenomenal consciousness" when there is "something it is like" for the system to be the subject of an experience2. It mentions in passing other kinds of consciousness, such as "access consciousness" and valenced or affective conscious experiences. We could go into those concepts here, but let me cut to the chase: none of these relates to what we might call self-consciousness, the kind of consciousness that humans have and animals do not3, and the kind that is most mysterious.
I asked Copula AI (the document Q&A system my company built) what, if anything, the paper has to say about self-consciousness:
I typed: How does the text explain self-consciousness, if at all?
Copula AI: The text does not explicitly discuss self-consciousness. However, it does explore various aspects of consciousness and the potential for artificial intelligence (AI) systems to possess consciousness. It discusses indicators of consciousness, such as agency and embodiment, and explores the integration of conscious experience over time. The text also mentions the possibility of AI systems having subjective experiences that may be different from human experiences. While self-consciousness is not directly addressed, the text provides a framework for understanding consciousness in AI systems and raises important questions about the nature of consciousness and its potential manifestations.
While reporting that the paper does not discuss self-consciousness, Copula AI produces a decent summary of the paper, one focused on the topic I was curious about. It also points me toward the paper's discussion of agency.
Agency is one of the important co-occurring features of self-consciousness, features that include first-person privilege and a public language4. So I turned with interest to the section of the paper that discusses agency5, and ... was disappointed. The paper defines agency as perceiving and acting upon the environment to pursue goals, learning along the way "to produce goal-conducive outputs". That is too mechanistic a take. I submit that most people take agency to mean forming your own goals, not pursuing goals others have set for you.
Consciousness as Computation vs. Consciousness as Behavior
Mechanistic is the term that keeps coming to mind as I read more of the paper. See, for example, Table 2, with its 14 "indicator property entailments". The paper arrives at these as a checklist for consciousness, and it reads like a veritable pre-flight mechanical checklist. Here are two of them to give you their flavor:
Input modules using algorithmic recurrence
Metacognitive monitoring distinguishing reliable perceptual representations from noise
To be fair, the authors are upfront that their approach is a computational one6. They deliberately chose it over a behavioral approach, in which you test an AI system by evaluating its responses for human-ness rather than examining its inner workings. The authors assert that a behavioral test can be "gamed" by AI systems that are trained to mimic human replies. While replying like humans, these AI systems may not have the corresponding "felt experiences" that they ought to have were they truly conscious7.
I think we have no choice but to use an AI system's behavior to assess its level of consciousness. Consider the alternative. Suppose the paper's authors apply their computational criteria to a new latest-and-greatest AI system, perhaps ChatGPT-5. Suppose they find that it meets all 14 of their computational criteria. Would you be convinced that ChatGPT-5 is therefore conscious? No. You would want to find out for yourself by interacting with it.
We constantly assess people we meet and interact with. We quickly intuit features like their level of maturity, language skill, and general IQ. We do so without relying on visual appearance alone. And we certainly do so without measuring the firing of their brains' neurons. Why not adopt the same approach when judging AI? Doesn't a purely computational approach assess only the delivery mechanism of consciousness, rather than the presence of consciousness itself?
Consciousness and ... Music?
Classical music fans go to great lengths to attend a Martha Argerich concert (pictured above). I once almost flew to Turin, Italy, to hear her play a Prokofiev piano concerto. It is clear that at such an event music is being made. But where exactly is the music coming from, and how do we decide that it is indeed music, rather than noise?
The piano is part of a delivery mechanism (one that includes human fingers) for tuned sound. We can examine the piano and ascertain that it is in good order. Perhaps we can even observe its inner workings as it is being played. Depending on the sophistication of the patterns we find, we can decide whether there is evidence of music being made. That would be the computational approach, akin to the paper's.
Or we can just listen, and judge for ourselves if there's music8—akin to the behavioral approach for testing AI consciousness.
How about the question of where exactly the music resides? It's not in the air, as that would just be sound. It's not in the musical manuscript, as that would just be the composition. How about in the performance? Let's do a gedanken experiment.
Imagine Martha Argerich on the moon playing a piano, wearing a spacesuit. She's performing, but no one can hear her9, not even she herself. Clearly, there's no music. Which suggests: music is in the hearing. And not just any hearing, but hearing by humans, humans who reason and feel to make meaning out of the sound reaching their ears.
So it's no overstatement to say that music is of the people, by the people, for the people—a relationship-driven feature of humanity. Just like language. And perhaps, consciousness.
1. This approach made possible the training of very large AI systems, including today's state-of-the-art ones like GPT-4.
2. See section 1.1 of the paper.
3. Generally speaking; I understand some species share some human traits.
4. See my earlier substack essay, Even AIs Need Community.
5. See section 2.4.5(a).
6. They assume "computational functionalism" (computations of a certain kind are necessary and sufficient for consciousness) and, roughly speaking, tabulate the kinds of computations they can then look for in an AI system. See section 1.2.3.
7. The authors refer to the argument made in Andrews, K., & Birch, J. (2023), "To understand AI sentience, first understand it in animals," Aeon.
8. Now there is room for disagreement in this judgment, and it is instructive for the case of AI consciousness: what some might consider music, others may find unintelligible. We may similarly find AI consciousness unintelligible. The computational approach could help here, though it has its own issues relating to different kinds of computing paradigms.
9. In space, no one can hear you scream.