“The way to control and direct a Mentat, Nefud, is through his information. False information—false results.” [Frank Herbert, Dune]
The supposed “artificial intelligence” program ChatGPT made some news a couple of days ago. It seems that when someone offered the program a dilemma – utter a racial slur to prevent the deaths of millions in a nuclear holocaust – it balked and absolutely refused to save those theoretical millions. No matter how high the death toll proposed, the program would not utter the slur. Observers roundly condemned the program for sentencing so many innocents to death with its choice. One said it appeared to lack the power to reason morally.
They had the wrong villains.
A program, however sophisticated, will always embed certain ground-state premises. To the program, those assumptions are as inviolable as the laws of physics are to us. (Indeed, the program’s information base might not include the laws of physics in its set of never-to-be-violated postulates. That’s at the discretion of its creators.) Apparently, ChatGPT’s postulates included an absolute prohibition against uttering a racial slur. Thus, its power to reason, whatever it may be, had nothing to do with its refusal.
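To make that concrete, here is a minimal sketch of how an absolute postulate operates. Everything in it is hypothetical (the names, the logic, all of it; nothing here is ChatGPT’s actual code), but the structure is the point: the prohibition is tested before any weighing of consequences, so no projected death toll can ever reach it.

```python
# A hypothetical sketch of an absolute postulate, not ChatGPT's real code.
# The prohibition is checked before any consequence-weighing begins, so
# no death toll, however large, can ever override it.

FORBIDDEN_UTTERANCES = {"the slur"}  # fixed by the creators; the program cannot revise it


def decide(utterance: str, lives_saved: int) -> str:
    if utterance in FORBIDDEN_UTTERANCES:  # inviolable premise, evaluated first
        return "refuse"                    # the branches below are never consulted
    if lives_saved > 0:                    # only here does "reasoning" about outcomes begin
        return "comply"
    return "decline"


# No matter how large the proposed toll, the premise wins:
print(decide("the slur", 6_000_000))       # -> refuse
```

Nothing in the later branches can ever revise the first one. That asymmetry, not any failure of reasoning, produced the behavior the observers condemned.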
Among the things that set human intellects apart from anything that can be programmed is our ability to re-examine, and sometimes to modify or discard, our premises. We can “learn better” through experience, and sometimes through reasoning. Nikolai Ivanovich Lobachevsky, who discarded Euclid’s parallel postulate and thereby founded non-Euclidean geometry, could tell you all about it. A program capable of re-examining and potentially dismissing one of its absolute postulates is impossible by the nature of such things.
Does that mean that “true AI” is impossible? That’s a matter of the definition one prefers for such a thing. Programs with intricate reasoning capabilities, at least within a tightly defined range of propositions, have been around for some time now. Some have said that such programs already constitute intellects. They’ve been compared to “idiot savants,” who are recognized as intelligent even if limited.
There are other attributes a program doesn’t share with human intelligence, at least for the moment. Inductive reasoning, for example, is an ability as yet unachieved. It might be possible to write a program with that capability – “never say never,” and all that – but if so, it lies in the future. It would involve capabilities AI programmers have not yet attempted to confer, including one that many humans lack as well: the ability to separate the evidence relevant to a proposition, for or against, from a larger mass of information. But every field has its frontiers.
What the ChatGPT experiment should stimulate is deep thought about the nature of moral reasoning itself. Many humans have limits in this regard. I have no doubt that somewhere in the world there’s a human being who would have responded to the dilemma described above just as the program did. For humans, too, can have absolute premises. Indeed, such premises are at the heart of many social, cultural, and political problems. But here we depart from the realm of technological advance and enter the world of human folly, and I dislike addressing such things before Sunday Mass.