Technology and AI: A Good Enough and Opaque Future for Humans

August 9, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

“What Self Driving Cars Tell Us about AI Risks” provides an interesting view of smart software. I sensed two biases in the write up, which I want to mention before commenting on the guts of the essay. The first bias is what I call the “engineering blindspot.” The idea is that while flaws exist, technology gets better as wizards try and try again. The problem is that “good enough” may not lead to “better now” within a timeframe measured by available funding. The optimism engineers have for technology therefore makes them blind to the minor issues created by flawed “decisions” or “outputs.”


A technology wizard who took classes in ethics (got a gentleperson’s “C”), advanced statistics (got close enough to an “A” to remain a math major), and applied machine learning experiences a moment of minor consternation at a smart water treatment plant serving portions of New York City. The engineer looks at his monitor and says, “How did that concentration of 500 mg/L of chlorine get into the Newtown Creek Waste Water Treatment Plant?” MidJourney has a knack for capturing the emotions of an engineer who ends up as a water treatment engineer, not an AI expert in Silicon Valley.

The second bias is the assumption that engineers understand inherent limitations while non-engineers “lack technical comprehension,” and that smart software at this time does not understand “the situation, the context, or any unobserved factors that a person would consider in a similar situation.” The idea is that techno-wizards have a superior grasp of a problem. The gap between an engineer and a user is a big one, and since comprehension gaps are not an engineering problem, that’s the techno-way.

You may disagree. That’s what makes allegedly honest horse races, the kind in which stallions do not drop dead or have to be euthanized to spare the creature discomfort and the owners big fees.

Now what about the innards of the write up?

  1. Humans make errors. This raises the question, “Are engineers human in the sense that downstream consequences are important, require moral choices, and honor the medical adage ‘Do no harm’?”
  2. AI failure is tough to predict? But what about predictive analytics, Monte Carlo simulations, and Fancy Dan statistical procedures, with a humanoid setting a threshold because someone has to do it?
  3. Right now mathy stuff cannot replicate “judgment under uncertainty.” Ah, yes, uncertainty. I would suggest considering fear and doubt too. A marketing trifecta.
  4. Pay off that technical debt. Really? You have to be kidding. How much of the IBM mainframe’s architecture has changed in the last week, month, year, or — do I dare raise this issue — decade? How much of Google’s PageRank has been refactored to keep pace with the need to discharge advertiser-paid messages as quickly as possible regardless of the user’s query? I know. Technical debt. Not an issue.
  5. AI raises “system level implications.” Did that Israeli smart weapon make the right decision? Did the smart robot sever a spinal nerve? Did the smart auto mistake a traffic cone for a child? Of course not. Traffic cones are not an issue for smart cars unless one puts some on the road and maybe one on the hood of a smart vehicle.
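
Item 2 above can be caricatured in a few lines of Python: a Monte Carlo loop happily estimates a failure rate, but the threshold that defines “failure” is still a number a humanoid typed in. Everything here (the Gaussian noise model, the 0.5 cutoff) is invented for illustration, not taken from the essay under discussion.

```python
import random

def monte_carlo_failure_rate(simulate_once, trials=100_000, seed=42):
    """Estimate the probability that one simulated run counts as a failure."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(trials) if simulate_once(rng))
    return failures / trials

# Hypothetical: sensor error modeled as Gaussian noise. A human picks the
# 0.5-unit cutoff "because someone has to do it" -- the math is rigorous,
# the threshold is a judgment call.
THRESHOLD = 0.5  # arbitrary, human-chosen

def sensor_misreads(rng):
    error = rng.gauss(0.0, 0.2)  # assumed noise model for illustration
    return abs(error) > THRESHOLD

rate = monte_carlo_failure_rate(sensor_misreads)
print(f"Estimated failure rate: {rate:.4f}")
```

The simulation is only as predictive as the assumptions baked into `sensor_misreads` and `THRESHOLD`; change either and the “predicted” risk changes with it.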

Net net: Are you ready for smart software? I know I am. At the AutoZone on Friday, two individuals were unable to replace the paper required to provide a customer with a receipt. I know. I watched for 17 minutes until one of the young professionals gave me a scrawled handwritten note with the credit card transaction code. Good enough. Let ’er rip.

Stephen E Arnold, August 9, 2023

