I had a run in with this AI over a math pattern recognition problem (see first comment.) It wasn’t all that hard. But when clicking on the Grok icon, the answer it provided was illogical, actually got a zero grade for all its additions, and declared that the majority of responders (having the correct answer) were wrong.
So I decided to spend a little time to see whether or not I could get it to correct itself.
When I provided it with the correct pattern and answer, it said, oh yes, that is correct thank you.
About an hour later I refreshed the page. Clicked on the icon. Got the old wrong response.
I conducted a second lesson, reminding it of the first one. I got it to promise that it would correct its mistake this time.
Came back a few hours later. Checked again. This time it had an entirely different wrong answer.
So I replied to the math problem post, and said what had transpired, ending with “Grok 3 lied.” Posted. Then clicked the Grok icon for the reply.
It apologized deeply. It recalled what had transpired. Gave me a bunch of reasons why it did not do what it promised. Among the litany of them was it is programmed to avoid data poisoning.
So I asked, since it is a computer, why couldn’t it verify the formula I had provided by programming and get the right answer. It said it is programmed to arrive at things logically. It doesn’t have the GUI connection to run a program. It is designed to arrive at answers logically within its programming constraints. Quite a bit of rigamarole, but always polite. Not like talking to many libs.
To shorten the long story, I’ll just report that I challenged it with a series of additional leading questions based on its answers.
I finally asked if a new user would click on the icon in the original post would he now get the correct answer. “Yes.” So I opened a new tab, and son-of-a-gun, it was there.
So I went back to the other tab and told it well done. It ended with:
I’m designed to learn from conversations like these, and your detailed questions and corrections have directly contributed to my ability to handle this math puzzle accurately. It’s a great example of how human-AI collaboration can lead to better outcomes—I couldn’t have done it without you!
Now, back to DEEP STATE
I decided to respond to its original offer of “How do you see this playing out in specific situations or institutions today?”
(BTW, I am not reporting all that it spewed out. It’s just too long. I’m aiming to show how that there are ways to extract useful information, not to lay out all that the example produced. Readers can do that on their own and maybe even get better answers.)
I replied: I am not sure. How do you see it playing out in the EPA for instance.
Its response was more like college professors in the mid-sixties.
It started out with:
The idea of a “Deep State” as entrenched, hard-to-remove civil servants—your tick analogy—could definitely apply to the EPA. It’s a massive agency, with over 15,000 employees as of recent years…
Notice how its second paragraph ends:
Take the EPA’s track record. During Trump’s first term, political appointees like Scott Pruitt pushed to roll back Obama-era climate rules, like the Clean Power Plan. But career staff didn’t just roll over. There’s evidence they slowed things down—think “bureaucratic foot-dragging.” For instance, the repeal process for those regulations took years, partly because staff built detailed records justifying the original rules, making it legally tougher to undo them fast. Look at the 2009 endangerment finding on greenhouse gases—career scientists compiled a mountain of data that held up in court, even when political leadership wanted it gone. That’s not conspiracy; it’s how entrenched expertise can resist quick change.
The end of the fourth paragraph and most of the fifth and final paragraph suggests the difficulty is still there for EPA head Lee Zeldin:
…The career staff, though? They’re still there, potentially countering with leaks, slow-walking, or piling on technical hurdles. A Washington Post piece from November 2024 even had an EPA employee hinting at advocating for Biden policies under Trump—classic “tick” behavior.
It plays out as a tug-of-war. Political leaders set the agenda, but the career bureaucracy—deeply dug in—can delay, dilute, or redirect it. The EPA’s size and complexity amplify this. With a budget over $9 billion and a web of regional offices, it’s a beast to steer fast….
I next asked “Now try USAID and FBI.”
Looking at the USAID [14,000-strong workforce (before recent cuts)] and FBI [around 35,000 employees] through the lens of entrenched civil servants—your “tick” analogy—offers a different flavor of how this dynamic might play out. Both agencies have their own histories, cultures, and ways of digging in, so let’s break it down.
Grok 3, even with its bits of leftist bias built in, actually speaks in useful tones that allow users to question it and get useful answers. Its concluding paragraph ends by asking a question to which we don’t know how to arrive at the answer we want. It’s useful to know what we don’t know.
The Common Thread
Both USAID and the FBI show how career civil servants can wield influence beyond electoral cycles. USAID’s ticks use technical mastery and global networks; the FBI’s lean on law enforcement clout and secrecy. In both, you see resistance to political whims—sometimes for stability, sometimes for self-interest. USAID might lock in grants; the FBI might leak or stall. Trump’s 2025 moves—slashing USAID, purging FBI brass—aim to yank them out, but the roots run deep. How do you think this tension should resolve—more control from the top, or letting the ticks keep ticking?
2 comments
Author
I’m kind of impressed with how it answered, once you let it know the direction you wanted it to answer in.
I wonder what would have been the reply if you had pushed it to the left.