AI Allegedly Doing Its Thing: Let Fake News Fly Free

June 2, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I cannot resist this short item about smart software. Stories have appeared in my newsfeeds about an AI which allegedly concluded that to complete its mission, it had to remove an obstacle — the human operator.

A number of news sources reported as actual factual that a human operator of a smart weapon system was annoying the smart software. The smart software decided that the humanoid was causing the mission to fail. The smart software concluded that the humanoid had to be killed so the smart software could go kill more humanoids.

I collect examples of thought-provoking fake news. It’s my new hobby and provides useful material for my “OSINT Blindspots” lectures. (The next big one will be in October 2023 after I return from Europe in late September 2023.)

However, the write up “US Air Force Denies AI Drone Attacked Operator in Test” presents a different angle on the story about evil software. I noted this passage from an informed observer:

Steve Wright, professor of aerospace engineering at the University of the West of England, and an expert in unmanned aerial vehicles, told me jokingly that he had “always been a fan of the Terminator films” when I asked him for his thoughts about the story. “In aircraft control computers there are two things to worry about: ‘do the right thing’ and ‘don’t do the wrong thing’, so this is a classic example of the second,” he said. “In reality we address this by always including a second computer that has been programmed using old-style techniques, and this can pull the plug as soon as the first one does something strange.”

Now the question: Did smart software do the right thing? Did it go after its humanoid partner? In a hypothetical discussion, perhaps. In real life, nope. My hunch is that the US Air Force anecdote is anchored in confusing “what if” thinking with reality. That’s easy for some people younger than me to do, in my experience.

I want to point out that in August 2020, a Heron Systems’ AI (based on Google technology) killed an Air Force “top gun” in a simulated aerial dogfight. How long did it take the smart software to neutralize the annoying humanoid? About a minute, maybe a minute and a half. See this Janes news item for more information.

My view is that smart software has some interesting capabilities. One scenario of interest to me is a hacked AI-infused weapons system. Pondering this idea opens the door to some intriguing “what if” scenarios.

Stephen E Arnold, June 2, 2023

