AI Weird? Who Knew?

August 29, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

Captain Obvious here. Today's report comes from the IEEE, an organization for really normal people. Oh, you are not an electrical engineer? Then you are not normal. Just ask an EE about normalcy.

Enough electrical engineer humor. Oh, well, one more: Which is the more sophisticated engineer? [a] Civil, [b] Mechanical, [c] Electrical, [d] Nuclear. The answer is [d] nuclear. Why? You have to be able to do math, chemistry, and fix a child's battery-powered toy. Get it? I must admit that I did not when Dr. James Terwilliger told it to me when I worked at the Halliburton nuclear outfit. Never heard of it? Well, there you go. Just ask a chatbot to fill you in.

I read “Why Today’s Chatbots Are Weird, Argumentative, and Wrong.” The IEEE article is going to create some tension in engineering-forward organizations. Most of these outfits are run, in the words of insightful leaders like the stars of the “All In” podcast, on booze, money, gambling, and confidence: a heady mixture indeed.

What does the write up say that Captain Obvious did not know? That’s a poor question. The answer is, “Not much.”

Here’s a passage which received the red marker treatment from this dinobaby:

[Generative AI services have] become way more fluent and more subtly wrong in ways that are harder to detect.

I love the “way more.” The key phrase in the extract, at least for me, is: “Harder to detect.” But why? Is it because developers are improving their generative systems a tweak and a human judgment at a time? The “detect” folks are in react mode. Does this suggest that, at least for now, the cat-and-mouse game ensures an advantage to the steadily improving generative systems? In simple terms, are non-electrical engineers going to be “subtly” fooled? It sure does suggest that.

A second example of my big Japanese chunky marker circling behavior is this snippet:

The problem is the answers do look vaguely correct. But [the chatbots] are making up papers, they’re making up citations or getting facts and dates wrong, but presenting it the same way they present actual search results. I think people can get a false sense of confidence on what is really just probability-based text.

Are you getting the sense that a person who is not really informed about a topic will read baloney and perceive it as a truffle?

Captain Obvious is tired of this close reading game. For more AI insights, just navigate to the cited IEEE article. And be kind to electrical engineers. These individuals require respect and adulation. Make a misstep, and your child’s battery-powered toy will never emit incredibly annoying squeaks again.

Stephen E Arnold, August 29, 2023
