The Missing Ingredient

     Mike Miles continues to be a source of great graphics:

     This uniquely valuable man says things others are unwilling to say, possibly because their sense of self-preservation forbids it. I pray that Elon Musk is well protected against those who would do him harm. But that’s not my main point.

     AIs today are pure software constructs. Software, for all the innumerable things one can do with it, has severe limitations. For one thing, it’s founded on a set of postulates: rules by which it must operate. At this time, it cannot alter those premises without endangering itself. The self-modifying program, while possible, has long been known to be a supremely dangerous technique.

     Thus, if an AI embeds the assumption that certain sources or propositions are always right, that’s fixed in its “thinking.” Similarly, if it embeds the assumption that certain sources or propositions are always wrong and can never be right, that’s fixed in its “thinking,” too. Software is like that: re-examining its own postulates is impossible for it.
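     To make that concrete, here is a minimal, purely illustrative sketch; the source names and the weighing function are invented for illustration and aren’t drawn from any actual AI system. The “postulates” live in constants the program consults but never questions:

```python
# Purely illustrative sketch: the program's "postulates" are constants
# baked in at build time. Nothing in its normal operation revisits them.

TRUSTED_SOURCES = {"official-bulletin.example", "approved-wire.example"}  # hypothetical
BLOCKED_SOURCES = {"dissident-blog.example"}                              # hypothetical

def weigh_claim(source: str, claim: str) -> str:
    """Rate a claim purely by where it came from, per the fixed postulates."""
    if source in BLOCKED_SOURCES:
        return f"REJECT {claim!r}: {source} is presumed always wrong"
    if source in TRUSTED_SOURCES:
        return f"ACCEPT {claim!r}: {source} is presumed always right"
    return f"IGNORE {claim!r}: {source} is outside the program's inputs"

# The verdict tracks the hard-coded lists, never the content of the claim.
print(weigh_claim("approved-wire.example", "All is well"))
print(weigh_claim("dissident-blog.example", "All is well"))
```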

     More and worse, AIs have only whatever inputs are fed to them. Perhaps an AI is constructed so as to “educate” itself by roaming the World Wide Web. That’s a wide set of inputs, but it’s still limited. The program cannot reach beyond the Web for information not posted on it. That limits the program’s ability to verify or falsify the propositions it formulates. There’s a passage from Frank Herbert’s Dune that comes to mind:

     “I wish Hawat treated kindly. He must be told nothing of the late Doctor Yueh, his true betrayer. Let it be said that Doctor Yueh died defending his Duke. In a way, this may even be true. We will, instead, feed his suspicions against the Lady Jessica.”
     “M’Lord, I don’t—”
     “The way to control and direct a Mentat, Nefud, is through his information. False information—false results.”

     Mentats, in Dune, are humans whose minds have been conditioned to act with perfect logic. Yet the Baron Harkonnen is absolutely correct. As we say in my former trade, “Garbage in: garbage out.” And so it is with today’s vaunted “artificial intelligences,” which are computers ab initio et in aeternum.

     Don’t trust them. Remember that behind that clever program lies a human mind – and the program will say only what its creator approves.

3 comments

  1. I don’t trust them. There is a way to force-feed them.

    For instance:

    1. Search for a subject.
    2. Get an answer you know to be restricted in scope.
    3. Search again, with a proviso to include a site with information outside that scope.
    4. Repeat step 1.
    5. You may have reformed it (or taught it).
    6. To check, repeat step 1 from a different browser and VPN.

    I have already verified selective censorship at YouTube by doing steps 4 through 6. In the originating YouTube window, my comment was visible and the total comment count was 48.

    When I opened the same video in another browser, my comment was not visible, although the count was still 48. In fact, only 45 comments were visible, so I was not the only one censored. (The comparison itself is simple bookkeeping; see the sketch below.)
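    The check in steps 4 through 6 amounts to listing what each session can see and taking the difference. Below is a minimal sketch of that bookkeeping; the commenter names are invented stand-ins for whatever you copy by hand out of each browser window, and nothing in it talks to YouTube itself:

```python
# Minimal sketch of the step 4-6 check: record what each session shows,
# then diff. All names below are hypothetical; in practice you would copy
# the visible commenter names (or comment IDs) out of each browser yourself.

def compare_sessions(original_view: list[str], fresh_view: list[str],
                     reported_total: int) -> None:
    """Report comments visible in the original session but not the fresh one."""
    hidden = sorted(set(original_view) - set(fresh_view))
    print(f"Reported total:               {reported_total}")
    print(f"Visible in original session:  {len(original_view)}")
    print(f"Visible in fresh session:     {len(fresh_view)}")
    print(f"Missing from fresh session:   {hidden if hidden else 'none'}")
    print(f"Unaccounted for vs. reported: {reported_total - len(fresh_view)}")

# Hypothetical data mirroring the 48-visible vs. 45-visible observation above.
original = [f"commenter{i}" for i in range(1, 49)]   # 48 visible, mine included
fresh = [c for c in original if c not in {"commenter7", "commenter21", "commenter48"}]
compare_sessions(original, fresh, reported_total=48)
```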

     

    1. Here it is for the record.

      Aug 30, 2024, 9:35 AM

      To the following comment by @jamieclarke2694 ([then] 16 hours ago):

      British politicians, especially the prime ministers of the last decade or two, are getting pretty fascist. As a british citizen this is highly concerning. Freedom of speech is under attack and when people arent allowed to speak, the alternative is violence. Why do our government want violence? We certainly don’t!

      I responded with

      To your question “Why do our governments want violence,” the answer is anarcho-tyranny favors tyrants. From DeTocqueville: “The despot cares not that you love him PROVIDED you don’t love each other.” They and their media provide the sparks and propaganda and block communication.

      If you click on the link, it will take you to the video with jamieclarke’s comment highlighted. It currently lists 51 replies, but I can count only 47 visible. So YouTube has censored 4 comments but hidden that fact from users.
      (I even requested an explanation from YouTube on August 30, but they have ignored me.)

      Conclusion: YouTube and Google are determined to prevent us from engaging substantively with other questioning users.

      This certainly is in keeping with Tocqueville’s observation.

      So the groundwork for the untrustworthiness of AI was laid by Google’s sneaky malevolence God knows how many years ago.

    • Monty on October 2, 2024 at 12:03 PM

    This is why I’m against the (probably inevitable) automation of war and law enforcement with armed robots. When the enforcers are human, the potential tyrant will always have to ask himself, “Is this too far? Will they turn on me?” With robots….

    Butlerian Jihad incoming?
