When Wizards Flail: The Mysteries of Smart Software

July 18, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

How about that smart software stuff? VCs are salivating. Whiz kids are emulating Sam AI-man. Users are hoping there is a job opening for a Wal-Mart greeter. But there is a hitch in the git along; specifically, some bright experts are unable to understand what smart software does to generate its output. The cloud of unknowing is thick and has settled over the Land of Obfuscation.

“Even the Scientists Who Build AI Can’t Tell You How It Works” has a particularly interesting kicker:

“We built it, we trained it, but we don’t know what it’s doing.”


A group of artificial intelligence engineers struggling with the question, “What the heck is the system doing?” A click of the slide rule for MidJourney for this dramatic depiction of AI wizards at work.

The write up (which is an essay-interview confection) includes some thought-provoking comments. Here are five; you can visit the cited article for more scintillating insights:

Item 1: “… with reinforcement learning, you say, ‘All right, make this entire response more likely because the user liked it, and make this entire response less likely because the user didn’t like it.’”

Item 2: “… The other big unknown that’s connected to this is we don’t know how to steer these things or control them in any reliable way. We can kind of nudge them …”

Item 3: “We don’t have the concepts that map onto these neurons to really be able to say anything interesting about how they behave.”

Item 4: “… we can sort of take some clippers and clip it into that shape. But that doesn’t mean we understand anything about the biology of that tree.”

Item 5: “… because there’s so much we don’t know about these systems, I imagine the spectrum of positive and negative possibilities is pretty wide.”

For more of this type of “explanation,” please, consult the source document cited above.

Several observations:

  1. I like the nudge-and-watch approach. Humanoids learning what their own code does may be useful.
  2. The nudging is subjective (a human skill), and the tree-growing analogy concedes that no one knows exactly how the growth works. Just do the bonsai thing: clip and hope. Interesting, but is it efficient? Will it work? Sure, or at least as far as Silicon Valley thinking permits.
  3. The wide spectrum of good and bad outcomes. My reaction is to ask the striking writers and actors what their views of the bad side of the deal are. What if the writers get frisky and start throwing spit balls or (heaven forbid) old IBM Selectric type balls? Scary.

Net net: Perhaps Google knows best? Tensors, big computers, the need for money, and control of advertising — I think I know why Google tries so hard to frame the AI discussion. A useful exercise is to compare what Google’s winner in the smart software power struggle has to say about Google’s vision. You can find that PR emission at this link. Be aware that the interviewer’s questions are almost as long as the interview subject’s answers. Does either suggest downsides comparable to the five items cited in this blog post?

Stephen E Arnold, July 18, 2023

