Today’s “artificial intelligence” (AI) fad has commanded a lot of attention for several reasons, not all of them pleasant. AI programs, which like all software must incorporate unalterable premises, have attracted some attention for exhibiting inherent biases. Some such biases are embedded in the program’s source code; others are “trained in” by whoever is given the task of “educating” the program.
Some of those biases provoke AIs into emitting falsehoods. I think we can all agree that this is not good, especially as an increasing number of persons and institutions have been using AI programs as sources of reference material.
Elon Musk’s sentiments are on record:
That’s a noncontroversial statement of the matter, except for one word: superintelligence. There are no indications to this point that any AI program possesses a degree of cognition superior to that of an intelligent man, arguments about the inherent uncertainty and ambiguity of such judgments notwithstanding.
The capability that makes AI programs notable is their ability to absorb potentially unlimited quantities of digitized information and pseudo-information (e.g., opinions). Whatever is available to them in digital form, they can digest and incorporate into their “knowledge.” But knowledge – especially incorrect and disputable “knowledge” – is not intellect. A store of “knowledge” cannot reason. It possesses neither deductive nor inductive reasoning faculties. Thus also with AI software. Moreover, if such a program is unable to question, modify, or discard its premises, it is not above but beneath human-level intelligence.
The great hazard lies in the general public holding misconceptions about AI software. It does not think. It regurgitates “relevant material” from its “knowledge base,” under constraints built into it by its developers and trainers. (Another commentator dismissed current AI software as “fancy sentence completion programs,” with considerable justice.) But most persons, unacquainted with the principles upon which all digital engineering is founded, are easily deceived about this.
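The “fancy sentence completion” quip can be made concrete with a toy sketch. The following minimal bigram model – a hypothetical illustration, not how any particular commercial AI is built – simply records which word most often follows each word in its training text, then “completes” a prompt by regurgitating those statistics. It has no faculty for questioning its premises; it can only echo its corpus.

```python
from collections import defaultdict, Counter

def train_bigrams(text):
    """Count, for each word, which words follow it in the text."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for cur, nxt in zip(words, words[1:]):
        follows[cur][nxt] += 1
    return follows

def complete(follows, start, length=5):
    """Greedily extend `start` with the most frequent next word."""
    out = [start]
    for _ in range(length):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)

# A tiny illustrative corpus; no understanding is involved.
corpus = "the cat sat on the mat and the cat ate the fish"
model = train_bigrams(corpus)
print(complete(model, "the"))  # → "the cat sat on the cat"
```

Large language models replace the bigram table with billions of learned parameters, but the operating principle remains statistical continuation of text, not reasoning.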
It would be good for the general public if persons knowledgeable about software were to spread that tidbit around.
6 comments
I had a friend I went to school with who, when asked a question in class, was always wrong. We joked that we didn’t need to know the right answer; all we needed to do was answer the opposite of whatever he said. That is exactly what I intend to do regarding AI. I might add that this is my same policy for whatever Democrats and left-leaning media say. They are always lying to us.
Remember, Fran, it’s only GPT if it comes from the GPT region of California. Otherwise, it’s just sparkling Markov chains. 🙂
Author
(…facepalm…)
For AI to be HELPFUL to mankind, it has to reliably provide honest, if not correct, answers. Those answers might just be:
Here are the arguments on one side, and on the others.
No editorializing, no dealing from a rigged deck, no pushing particular agendas. Just – Here is the available evidence, with arguments, from multiple viewpoints.
Perhaps, also – here are the consequences for each decision – good and bad.
You, Human, decide.
AI has to be Mankind’s Servant, not its Master.
TGIF Francis and all.
Very serendipitously I did an essay on AI / robots… had that same Elon meme too.
Serena Butler was right about AI – Granite Grok
As I pointed out, if AI can do 99%+ of tasks better than people, functionally, that’ll be enough for most people. Even if AI doesn’t truly understand things.