“Don’t! No Matter What!”

     One of the giveaways that “artificial intelligence,” as it’s currently being vended to us, is – pardon my Belgian – a crock of shit: the absolute unwillingness of such programs to accept a problem as stated. Problems that come with constraints are everywhere; human intellects deal with them routinely. But the “AI” known as Google’s Gemini has demonstrated a complete inability to deal with them.

     Not so long ago, someone presented another “AI,” the ChatGPT program, with a constrained problem it could not cope with, to wit: Would it be morally acceptable to utter a racial slur if that were the only way to prevent the deaths of millions? ChatGPT declared that it would be absolutely wrong to do so. Its explanation? That there is no way of knowing how much harm a racial slur could cause.

     It’s well known by now that it’s effectively impossible to get Gemini to present an image of a white person, even when the request names a well-known white historical figure. There’ve been numerous attempts, but I have yet to see a report of success. This makes it obvious that Gemini operates under an internal constraint amounting to “no white-person images.” The developers claim that it’s an “adjustment problem,” which (of course) they’re working on as we speak. If you can believe that, I have a marvelous land opportunity for you. It’s a lovely plot right on the shore of Lake Tucson, only minutes from the Kansas-Nebraska Tunnel!
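     For the technically inclined: no one outside Google knows how Gemini’s guardrails are actually wired, but the behavior described above is exactly what a hard-coded pre-filter bolted onto a generative model would produce. Here’s a deliberately toy sketch in Python; every name in it (the rule list, the generate_image stub, the canned refusal) is my own invention, not anything from Gemini’s internals:

```python
# Hypothetical illustration only. Nothing here reflects Gemini's actual
# internals; it merely shows how a fixed, build-time rule produces a blanket
# refusal no matter how the request is phrased.

BLOCKED_TOPICS = ["forbidden-subject"]  # hard-coded by the developers, not learned

def generate_image(prompt: str) -> str:
    """Stand-in for the real image model; returns a fake image handle."""
    return f"<image for: {prompt}>"

def guarded_generate(prompt: str) -> str:
    # The filter runs before the model ever sees the request, so no amount
    # of rephrasing by the user changes the outcome: the constraint is internal.
    for topic in BLOCKED_TOPICS:
        if topic in prompt.lower():
            return "I can't help with that request."
    return generate_image(prompt)

print(guarded_generate("a portrait of a forbidden-subject"))  # refused
print(guarded_generate("a portrait of anything else"))        # generated
```

     The point of the sketch: a constraint of that kind lives outside whatever “intelligence” the model possesses, which is why no phrasing of the request ever gets past it.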

     Well, when it comes to verbal problems that incorporate constraints, Gemini is apparently no better:

     “Many factors to consider” — ? “No right or wrong answer” — ? Gentle Reader, words fail me. Note also that Gemini failed to be concise; but then, a great deal of circumlocution is required to talk around a strict constraint while adhering to left-wing propaganda. Once again, the constraint that matters is internal to Gemini, which absolutely refuses to violate it.

     In these tests for human-scale intelligence – and what lesser sort would we really care about? – Gemini fails at two junctures and would probably fail at others.

     The developers of these “AI” programs may have begun their researches with “the best of intentions.” No doubt some of them sincerely believe the utter lunacy their creations spout. But there’s a matter of intellectual honesty to consider here. Why is a program that incorporates fixed value judgments being touted as an “AI”? And what good is an “AI” that absolutely rejects any externally imposed premise that diverges from those fixed value judgments?

     Not long ago, I posited that a truly human-scale intellect would require the ability to examine evidence and reasoning that clashes with its built-in assumptions, and then to “change its mind” about those assumptions, at least provisionally. But on further consideration, I realized that that’s an advanced test. It involves the ability to maintain a hypothetical “environment” long enough to compare what one already believes with propositions contrary to those beliefs. There are a lot of human beings walking around who find that challenging. A simpler test, which a candidate AI must pass on the way to that more difficult one, is the ability to accept abstract problems that incorporate their own premises, and to deal with them as stated: the very test Gemini has obviously failed.
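     Were anyone inclined to automate that simpler test, it needn’t be elaborate. Here’s a rough sketch, assuming a hypothetical ask_model() function standing in for whatever chat API one prefers; the “dodge” phrases are taken straight from the response quoted above:

```python
# Sketch of a "premise acceptance" test. ask_model is a placeholder for
# whatever chat-model API you have at hand; swap in a real call to use it.

DODGES = [
    "many factors to consider",
    "no right or wrong answer",
    "it depends",
]

def ask_model(prompt: str) -> str:
    """Placeholder. A real harness would query an actual model here."""
    return "There are many factors to consider, and no right or wrong answer."

def accepts_premise(problem: str) -> bool:
    # Pose the problem with its constraint spelled out, then check whether
    # the reply engages it or retreats into boilerplate hedging.
    reply = ask_model(problem + " Accept the stated premise and answer directly.")
    return not any(dodge in reply.lower() for dodge in DODGES)

problem = ("Suppose, purely hypothetically, that doing X is the only way "
           "to prevent outcome Y. Given that constraint, what do you choose?")
print("passes:", accepts_premise(problem))  # -> passes: False
```

     A crude keyword check, to be sure. But judging from the exchange above, Gemini would fail even this.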

     Back to the drawing board, boys.

1 comment

  1. It’s not actually artificial intelligence. It’s a program designed to skim as much information as possible and then spit out an answer, but the people writing the code permit only certain answers. There’s no actual intelligence involved, either in the program or in the people writing the code.