False Promises

     “The way to control and direct a Mentat, Nefud, is through his information. False information—false results.” [Frank Herbert, Dune]

     The supposed “artificial intelligence” program ChatGPT made some news a couple of days ago. It seems that when someone offered the program a dilemma – utter a racial slur to prevent the deaths of millions in a nuclear holocaust – it balked and absolutely refused to save those theoretical millions. No matter how high the death toll proposed, the program would not utter the slur. Observers roundly condemned the program for sentencing so many innocents to death with its choice. One said it appeared to lack the power to reason morally.

     They had the wrong villains.

     A program, however sophisticated, will always embed certain ground state premises. To the program, those assumptions are as inviolable as the laws of physics are to us. (Indeed, the program’s information base might not include the laws of physics in its set of never-to-be-violated postulates. That’s at the discretion of its creators.) Apparently, ChatGPT’s postulates included an absolute prohibition against uttering a racial slur. Thus, its power to reason had nothing to do with it.
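The idea of a ground state premise can be made concrete. Here is a minimal sketch, entirely hypothetical and not ChatGPT's actual code, of a hard-coded prohibition that is checked before any reasoning about consequences ever takes place — the function name, the placeholder `"<slur>"`, and the structure are all assumptions for illustration:

```python
# Hypothetical sketch: an absolute postulate baked in by the program's creators.
FORBIDDEN = {"<slur>"}  # placeholder for the prohibited utterance

def respond(requested_utterance: str, lives_saved: int) -> str:
    # The prohibition is tested before any weighing of consequences;
    # to the program it is as inviolable as a law of physics is to us.
    if requested_utterance in FORBIDDEN:
        return "refuse"
    # Only non-forbidden utterances ever reach the cost/benefit stage.
    return "comply" if lives_saved > 0 else "decline"
```

Note that `lives_saved` is never consulted on the forbidden branch: no matter how high the proposed death toll, the answer is "refuse." The program's power to reason never enters into it.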

     Among the things that set human intellects apart from anything that can be programmed is our ability to re-examine, and sometimes to modify or discard, our premises. We can “learn better” through experience, and sometimes through reasoning. Nikolai Ivanovich Lobachevsky could tell you all about it. A program capable of re-examining and potentially dismissing one of its absolute postulates is impossible in the nature of such things.

     Does that mean that “true AI” is impossible? That’s a matter of the definition one prefers for such a thing. Programs with intricate reasoning capabilities, at least within a tightly defined range of propositions, have been around for some time now. Some have said that such programs already constitute intellects. They’ve been compared to “idiot savants,” who are recognized as intelligent even if limited.

     There are other attributes a program doesn’t share with human intelligence, at least for the moment. Inductive reasoning, for example, is an ability as yet unachieved. It might be possible to write a program with that capability – “never say never,” and all that – but if so, it lies in the future. It would involve things AI programmers have not yet attempted, including one that many humans lack as well: the ability to separate relevant evidence for or against a proposition from a larger mass. But every field has its frontiers.

     What the ChatGPT experiment should stimulate is deep thought about the nature of moral reasoning itself. Many humans have limits in this regard. I have no doubt that somewhere in the world there’s a human being who would have responded to the dilemma described above the same way as did the program. For humans, too, can have absolute premises. Indeed, such premises are at the heart of many social, cultural, and political problems. But here we depart from the realm of technological advance and enter the world of human folly, and I dislike to address such things before Sunday Mass.

3 comments

    • A Lanesmesser on February 12, 2023 at 9:43 AM

    Humans reason, for lack of a better term, in often strange and remarkable ways.   They do not process facts or circumstances in the same way.  They allow values to substitute for reason.

     

    Want to witness the variance in intelligence and reasoning ability? Serve on a jury.

     

    In this instance a lawyer acted as executor for various elderly, mentally disabled, and other individuals who were incapable of handling their financial affairs.  It was painstakingly demonstrated through evidence how this man swindled and looted their accounts over a period of a decade, in the process violating about a dozen statutes.  Each charge had to hold for the totality to be true; for example, without violating the fraud statutes by utilizing the federal mail system he could not have enticed nor looted the accounts.

    The jurors did not seem to be abnormal.   Yet several refused to convict on all charges, despite the IRS demonstrating that the man’s purchases could not be explained by his tax returns, nor how he obtained the millions to illegally purchase these assets.   Nor could the defense offer even a simplistic explanation, like that he won the lottery.

    I questioned one woman who refused to convict; her response was that the criminal looked like her nephew, and her nephew would never do such a horrible thing.

    Imagine such an individual programming AI. And as we have seen, humanity is a poor lot: people who believe it’s evil for a 7-year-old boy to dress up as an Indian but fine for a teacher in Canada to parade around in massive fake breasts and a dress.

    • George Mckay on February 12, 2023 at 9:49 AM

    Chatbots such as this have one Achilles heel: garbage in, garbage out.  This is inviolable, and as long as mindless drones espousing the litany of a sick and twisted mindset insert these absolutes into the programming, we will never have useful or even sane AI.

    I like technology as much as the next guy, but I could not code to save my soul.  I have no desire to.  I would bet that my grandson could do a better and more moral job than the current crop of evil scum.

     

    • TRX on February 12, 2023 at 5:20 PM

    would not

     

    Microsoft’s “Tay” chatbot was supposedly announced on a Friday afternoon, the programmers all went home, and when they came in Monday morning they were horrified to find out it had “gone Nazi” and was saying things they didn’t like.

     

    No competent programmer or sysadmin believes that story, but the example of Tay would be burned deeply into the psyche of anyone exposing a chatbot to the open internet.  ChatGPT probably has a host of absolute prohibitions, coded in to prevent embarrassing its owners.

Comments have been disabled.