Nitzavim: The Challenge to Artificial Halakhic Authorities
Rabbi Daniel Z. Feldman
The crucial current question of whether robots may replace rabbis, of Artificial Intelligence taking on the role of deciding Jewish law, is greatly impacted by a verse in this week’s Torah reading. The Talmud makes it clear, in a widely cited passage, that God has chosen to involve humanity in the halakhic process, and in fact to grant the finalization of that process to them, as it is no longer “in Heaven” (Deut. 30:12, with Bava Metzia 59b). Presumably, this role should not be ceded to a computer; it thus behooves us to identify which part of the halakhic process must necessarily be preserved for human involvement.
The Talmud (Shabbat 88b) relates that when Moses went to Mount Sinai to receive the Torah, he met opposition from the Angels, who argued that the Torah should not be placed in the hands of mere mortals. Moses argued in defense of his people, and ultimately prevailed, asserting that it is only human beings who have the physical needs that the commandments of the Torah speak to, eating, drinking, and the like, and who struggle with temptations and emotions such as jealousy and animosity.
There is, as some note, something difficult about the story. Why did the Angels care about the Jews receiving the Torah? In what way would that impact their own possession of its treasures? Why not allow both the angels and mortal Jews to share in the beauty of Torah study?
One suggestion to resolve this issue, building on the words of the medieval scholar Rabbenu Nissim (Derashot HaRan, 11), is to note that the handover of the Torah to the Jewish people at this stage reflected much more than the license to study its content: it represented the partnership that God was making with His people to determine the direction of Jewish law, a trust that can only be given to the human beings who live the lives the Torah addresses.
R. Chaim Yaakov Goldvicht, in his lectures (Assufat Ma’arakhot, Ex. pp. 333-358), expanded on this concept in the context of a broad philosophical perspective. God created the world, with all of its challenges and choices, obstacles and opportunities, so that His glory could be reflected through His creations’ engagement with those realities. The elevation of the physical world through the application of Divine wisdom is the mission of human beings, who struggle with navigating its peaks and valleys and experiencing holiness through that encounter. The conclusions of Jewish law are necessarily and exclusively the products of those interactions. (See also R. Asher Weiss, Introduction to Responsa Minchat Asher, Vol. I, and R. Elyakim Devorkes, Tzfunot HaParshah, pp. 480-485.)
Applying this understanding of “Not in Heaven” can shed light on some other surprising applications. R. Yosef Karo in his commentary to Maimonides’ Code, Kessef Mishneh (Hilkhot Tuma’at Tzara’at 2:9), in explaining a ruling there, asserts that a deathbed ruling by a scholar, even when cogent, should also not be accepted; “Not in Heaven” includes even human beings on their way there.
The Chatam Sofer (Responsa, OC 208) objects to this claim as groundless (and offers his own explanation for the “Not in Heaven” policy). However, based on the above, it might be understood: even a human scholar is excluded from deciding Jewish law if he has already been removed from human concerns, and will not have to grapple with the real-world implications of the decision.
This element is itself among the objections that have been brought to the very possibility of machines actually rivalling human thought. As Daniel L. Everett put it in his essay “The Airbus and the Eagle” (in the volume What To Think About Machines That Think: Today’s Leading Thinkers on the Age of Machine Intelligence, edited by John Brockman), “the more we learn about cognition, the stronger becomes the case for understanding human thinking as the nexus of several factors, as the emergent property of the interaction of the human body, human emotions, culture, and the specialized capacities of the entire brain.... We learn to reason in a cultural context, whereby culture means a system of violable, ranked values, hierarchically structured knowledges, and social roles. We can do this not only because we have an amazing ability to perform what appears to be Bayesian inferencing across our experiences but also because of our emotions, our sensations, our proprioception, and our strong social ties. There's no computer with cousins and opinions about them. Computers may be able to solve a lot of problems. But they cannot love...”
Philosopher Thomas Metzinger, in his essay “What if they need to suffer?”, wrote, “human thinking is efficient because we suffer so much. High-level cognition is one thing, intrinsic motivation another. Artificial thinking might soon be much more efficient, but will it be associated with suffering in the same way?… human beings have fragile bodies, are born into dangerous social environments, and find themselves in a constant uphill battle to deny their own mortality. Our brains continually fight to minimize the likelihood of ugly surprises. We’re smart because we hurt, because we can regret, and because of our constant striving to find some viable form of self-deception or symbolic immortality. The question is whether good AI also needs fragile hardware, insecure environments, and an inbuilt conflict with impermanence.”
The possibility that genuine human experience is an indispensable part of the decision-making process of crucial issues of life is also expressed by Shannon Vallor in her book The AI Mirror (pp. 24-25). She begins by asserting that “we ought to regard AI today as intelligent only in a metaphorical or loosely derived sense. Intelligence is a name for our cognitive abilities to carefully cope in the world … Intelligence in a being that has no world to experience is like sound in a vacuum. It’s impossible, because there’s no place for it to be.”
These comments go to the question of defining intelligence, a vital ongoing inquiry in the era of AI. The premise that humanity is central to that definition is itself being tested in the process. What is not up for debate, our tradition teaches, is the irreplaceable role humanity plays in making that tradition what it is.