I have been working with an ENT surgeon to get my ChatGPT-based web service[1] to answer questions of the kind that appear on the otolaryngology board exams. After loading it up with voluminous medical texts, the service answers clinical questions phenomenally well:
Q: A 6-year-old child complains of ear pain. The parent says that the child has been pulling on her ears in recent days. How to proceed with a diagnosis?
A: The complaint of ear pain in a 6-year-old child with recent ear pulling suggests a possible middle ear infection, also known as acute otitis media (AOM). Other symptoms to look for include fever, irritability, and hearing loss. Otoscopy can help confirm the diagnosis by revealing a bulging, opaque tympanic membrane that may be yellow or yellowish-pink in color. However, otoscopy can be challenging in young children due to narrow ear canals and a moving target. Therefore, a thorough medical history and physical examination are also important. Treatment for AOM typically involves antibiotics and pain management. However, watchful waiting may be appropriate in certain cases, particularly if symptoms are mild and the child is over 2 years old.
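As an aside on mechanics: "loading up on medical texts" and answering against them resembles retrieval-augmented generation. The sketch below is purely illustrative and not the service's actual pipeline—all function names are hypothetical, and the word-overlap scorer stands in for whatever real retrieval the service uses. It shows the basic idea: pick the reference passages most relevant to the question and prepend them to the model prompt.

```python
# Hypothetical sketch of a retrieval-augmented prompt builder.
# Real systems use embeddings, not word overlap; this is a stand-in.

def chunk(text, size=40):
    """Split a document into passages of roughly `size` words."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(question, passage):
    """Crude relevance score: fraction of question words in the passage."""
    q = set(question.lower().split())
    p = set(passage.lower().split())
    return len(q & p) / (len(q) or 1)

def build_prompt(question, corpus, top_k=2):
    """Assemble a prompt from the top_k most relevant passages."""
    passages = [c for doc in corpus for c in chunk(doc)]
    best = sorted(passages, key=lambda p: score(question, p), reverse=True)[:top_k]
    context = "\n---\n".join(best)
    return f"Use the excerpts below to answer.\n{context}\n\nQ: {question}\nA:"
```

The resulting prompt grounds the model's answer in the loaded texts rather than in its general training data, which is what makes domain-specific performance like the answer above plausible.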
So, there's an ongoing concern that in the not-too-distant future we won't need nearly as many doctors as we currently do. Lawyers are also feeling the heat: an AI system that knows all the statutes and prior court decisions might figure out how a case would go, and inform potential clients directly.
While it may come to pass that a lot of what medical and legal professionals currently provide gets handled by AI instead, I have a thesis that there will remain a role here that only humans can play[2]. A role distinct from knowledge retrieval and logical reasoning, one common to lawyers, doctors, and, yes, priests: that of a shepherd.
My (Ignominious?) Day in Court
About 15 years ago, when I was young and stupid, I was in a civil court contesting a traffic ticket I had gotten while driving around Times Square. The ticket was for making a right turn where a traffic sign said that was not allowed. I was pissed at getting the ticket nevertheless, because there were clear extenuating circumstances: a street fair was going on nearby and many streets were blocked; not making that turn would have meant a good half hour of roundabout driving in stop-and-go traffic. I brought a hand-drawn diagram, which I was quite proud of, to show the judge why it was stupid of the cop to give me a ticket.
The judge, while acknowledging my diagram, didn't take my side. And that's an understatement—she all but barked at me. In her pronouncement upholding the ticket, she spoke matter-of-factly, but her delivery dripped with disdain. I felt totally chastised. The fine I had to pay was small, but I did learn an important lesson—hard to put what exactly in words, but something like: the law doesn't bend for lame excuses.
Now, if this had all happened online, with judgment arriving over email, I'd have simply found the result unfair and sulked. The judge and her affect, perhaps along with the setting of the civil court that included a flag and other accoutrements, helped grow in me a new appreciation of being a citizen in a democracy subject to the rule of law.
And how do lawyers figure in this? They, like Congress, judges, and cops, play a key role in supporting the superstructure of a rules-based society, one that enables people with conflicting interests to live together in peace and prosperity. They do this not just by intelligently fetching the appropriate rules for a given situation, which AI can plausibly do, but also by shepherding their clients through the process of accepting the implications of those rules.
We've all signed legally binding contracts relating to, e.g., employment, apartment rentals, property deeds, and the like. I have consulted attorneys on some of the more complex ones, such as one involving incorporating a startup with two other co-founders. Hearing them speak of real cases where the terms were contested, and how the resolutions panned out, made the whole process feel less abstract and more real. It deepened my understanding of what I would be committing to if I signed. And when I did sign, I signed with greater resolve than I had started with.
Doctors at the Border
The analogy in medical practice is straightforward. For a patient presenting symptoms, one can imagine a diagnosis and treatment plan being produced by AI—kind of like an automated parking ticket. But like lawyers getting their clients to grok the implications of the law, human doctors can help patients psychologically transition into the new state of affairs brought about by their condition. Furthermore, they can encourage them to stick to the treatment plan by relaying the experiences of other patients who've gone through it. This may not be necessary for common minor ailments such as a cold or a sprained ankle, but consider a case like Frank Bruni's:
Frank Bruni is a well-known New York Times opinion writer. I heard him recount his story in a recent podcast interview[3]. In 2017, without any warning, he woke up unable to see clearly. It turned out he had suffered a stroke while asleep that damaged his right eye's optic nerve. He'd gone blind in that eye! Worse, doctors told him that there was a 40% chance his left eye would eventually follow, and that he should no longer fly on planes because cabin-pressure changes posed a risk.
Even if Mr. Bruni had a wonderful, supportive family and network of friends, they would be just as bewildered and at a loss as to how to make sense of his sudden predicament. Doctors, who have an indirect but nevertheless meaningful history with the disease through grappling with it as their patients' agents, can offer support in an authoritative way that others, including therapists, can't.
Now, we've all had experiences with doctors who aren't very personable (cf. the "poor bedside manner" mentioned in online reviews...). They, and (younger?) doctors who feel they signed on for the science of medicine, may protest that their role is to find objective solutions to physiological problems—like a mechanic fixing a faulty machine, except in this case the machine is the human body. Even if we concede the point, that the doctor's role is to fix the body, doing a good job in that role requires contending with the patient's mind. For as the mind depends on the body that facilitates it, so does the body depend on the mind driving it.
And that, I submit, is the most interesting part of a doctor's calling: navigating the border between the body and the mind. How human persons—with their freedom, rights and responsibilities—interface with the brute biology of body and brain. This remains an unsolved and contentious mystery. But while everyone has first-hand experience in knowing what it's like to be a human person, only doctors also intimately know how the human body works.
So they are uniquely placed to shepherd the rest of us in our life-long journey with our embodiment. To the extent they choose to see themselves as just mechanics tinkering with human bodies, their replacement by AI and automation beckons.
The OG Shepherds
Before there was law, before there was medicine, there was religion. Priests of various denominations have for millennia officiated at ceremonies marking birth, coming of age, marriage, and death. These days they can also be found offering their services in prisons, hospitals, and army camps. They could just be enacting ritual, without providing personalized guidance; but even so, they influence proceedings by their affect and seriousness of purpose. A Jewish wedding and a Buddhist funeral I attended come to mind.
At the wedding, a young, well-spoken rabbi—who looked like a peer I could hang out with—married two good friends of mine. My friends would later confide in me that they had been quite nervous going into the wedding. They both came from broken families and had doubts about the durability of modern marriage. They told me that in the lead-up to the marriage, they'd met with the rabbi separately and talked it through with him. The rabbi helped them gain a measure of conviction.
At the funeral, which like the wedding took place in upstate New York, a very senior American monk, Bhikkhu Bodhi, was the main speaker. The husband of my meditation center's chief teacher had died of cancer and was being cremated. Bhikkhu Bodhi had been a teacher to both the husband and the wife, and came at the wife's invitation. I remember that the cremation happened during the event, and though it was offstage, there was the sound of a big gas fire turning on. I was a little shocked when it happened. There was at least one audible gasp in the audience.
Bhikkhu Bodhi did not offer any explicit solace in his words. But he just had a lot of gravitas about him. He was neither flippant nor grave in his words or movements. He talked about his memories of the deceased, and then led us in some Buddhist chants with simple grace. When I drove back home afterwards, I felt a new quiet determination in me. To carry on.
Priests, unlike lawyers and doctors, don't play much of a role in extracting and applying information of some specialized kind. At least not in the 21st century, when everyone has access to religious texts and commentary. That is also why we never ask if priests can be replaced by AI. Lawyers and doctors can see themselves as performing a similar function in relation to their respective domains: to help us comport to the rule of law, and the reality of biology, as our lives interact with them.
Et tu?
And how about you and me? Are we afforded a role like lawyers and doctors in shepherding non-experts as their lives touch on our domains of expertise? Can we lean into the human side of our professions, rather than focusing just on the problem-solving side?
Let me see. I work in technology. My career was initially in finance and, in recent years, in AI. Technology has been a pervasive part of our lives for at least a decade and has changed significant aspects of our day-to-day—witness the rise of remote work since the pandemic. I could, in my small way, help non-experts I meet in business and in personal life understand what AI can and cannot do well, and listen as they share their apprehension and excitement about the coming changes.
Also, I could make sure to keep writing this Substack.
[1] Now live at https://www.chatent.net. My company is at https://www.copula.ai.
[2] And to the extent an AI system learns to perform this human role, we will start regarding that system as human-like, accruing the corresponding rights and responsibilities.
[3] Frank has written a book about his experience, which is reviewed here: https://www.nytimes.com/2022/02/28/books/review/the-beauty-of-dusk-frank-bruni.html