Suppose you chance upon a stray sheet of printout at your workplace's conference room. You pick it up, and find just the words "It's raining here" printed on it. You shrug, not knowing what to make of it, and recycle-bin the paper.
Now imagine instead you're on a call with a buddy who lives a few time zones away. And he says to you "It's raining here." The same phrase, having the same definition, in this case conveys meaning: you understand that it's raining right now where he lives.
You may go Duh, since the distinction between the two cases is obvious. But it may become less so as we reach the cusp of an explosion in AI-generated content.
Should AI content be likened to the first case, speech somehow caused by the environment; or to the second, speech intended for you by someone?
Enter ChatGPT
Juicy patty, sizzling hot
Toasted bun, crisp lettuce and tomato
A symphony of flavors, a feast for the senses
Savory, satisfying, a true American classic.
Ian Bogost, writing in The Atlantic1 about ChatGPT—a text-generating AI bot—reproduced the above poem it wrote about hamburgers. Ian finds ChatGPT's output 'while fluent and persuasive as text, ... consistently uninteresting as prose'. He says it reminds him of John Warner's critique of high school writing as training students to produce 'imitations of writing'. Ian moderates his stance later in his essay, saying that ChatGPT can be fun 'to probe, play with text'.
And maybe that's how content-generating AIs ought to be framed: as 'aesthetic instruments'2 for play, rather than agents with expertise. Such is part of his conclusion, and I find myself agreeing with his take. It is also consistent with how we began this essay—that AI content lacks intent. Instruments have no intentions of their own. If it is to make music, a saxophone needs a human breathing life into it.
"We don't carry any dead juice in this deli."
When my girlfriend and I, both vegetarians, were shopping for juice at a New York bodega recently, we encountered an irascible storekeeper. When we asked him if they had any bottled vegetable juice, the above quote was his reply. He pointed us to a counter in the back where we could pick fresh fruit and veggies to be juiced on demand.
It's like the difference between a meal cooked for you by a loved one and a microwave TV dinner. Or between a live band playing its heart out for an enthusiastic audience and muzak coming out of cute white Sonos speakers at a kitschy coffeeshop.
The dead juice, the dead food, and the dead music are only weakly, if at all, intended for you.
Second-Order Intention (technical)
Meaning is conveyed by John's utterance "It is raining here" to Mary when:

John intends, with his utterance, that Mary believe that it is raining where John is, by way of her recognizing that such is his intention.
The above is my paraphrase of a passage from the chapter Meaning in Penguin's Modern Philosophy3. That passage concludes as follows:
"Meaning is a matter of second-order intention: of intentionally drawing attention to one's intentions. So says Grice, and many people have agreed with him."
The author of Modern Philosophy, Sir Roger Scruton, is referring to Paul Grice's famous 1957 paper Meaning. Another work Sir Roger later on refers to is John Searle's Speech Acts. These and other philosophers have analysed the various ways we use language: to assert, question, command, warn, promise, request, and more.
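Grice's analysis is often unpacked into three jointly necessary clauses. The following is my own rough schematic of that standard gloss, not Scruton's or Grice's notation, with U the utterer, x the utterance, A the audience, and p the proposition meant:

```latex
\begin{align*}
&\text{$U$ means $p$ by uttering $x$ to $A$ only if $U$ intends:}\\
&\quad (1)\ \text{that $A$ come to believe } p;\\
&\quad (2)\ \text{that $A$ recognize intention } (1);\\
&\quad (3)\ \text{that this recognition be part of $A$'s reason for believing } p.
\end{align*}
```

Clause (2) is the "second-order" part: an intention about the recognition of an intention. Notice how the stray printout fails clause (2) outright, while the friend on the phone satisfies all three.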
Revisiting this work might help us learn to place AI generated 'speech' in the right context. And we've been doing a little bit of that here.
Conclusion
This essay argues that lack of intentionality is why we find AI content uncannily unsatisfying in a free context, such as a conversation or an art setting: the context itself does not supply meaning, and we find no there there in the content either. There are two ways to go about addressing this issue.
The first way is to provide a clear frame for AI content, and use the AI as an instrument—as Ian Bogost suggests. For example, we ask the AI system to translate a Shakespeare sonnet into French in a similar period style, rather than ask it to compose a French sonnet from scratch. Or in a generative art setting, we figure out what we want to communicate through the art first, and then give the AI clear instructions to help convey that intent, editing and revising as the work takes shape.
The second way is to give the AI itself intent. And here, we get into AGI4 territory. For an AI system to exhibit intent that comes across as real and not fake, it will need to have values and purpose—features we see as belonging solely to self-aware entities living in communities5.
2. ibid.
3. Modern Philosophy: An Introduction and Survey. Roger Scruton, 1996.
4. AGI stands for Artificial General Intelligence: an AI system that can learn and perform any intellectual task that a human being can. See the Wikipedia article.
5. See my earlier post, Even AIs Need Community.