Who Should Watch Over Smart Software? No One. Self-Regulation Is the Answer

March 11, 2021

I read an amusing academic article called “Someone to Watch Over AI and Keep It Honest – and It’s Not the Public!” The idea is that self-regulation works. Full stop. Ignoring the 737 Max event and Facebook’s legal move to get anti-trust litigation dumped, the write up reports:

Dr Bran Knowles, a senior lecturer in data science at Lancaster University, says: “I’m certain that the public are incapable of determining the trustworthiness of individual AIs… but we don’t need them to do this. It’s not their responsibility to keep AI honest.”

And what’s the smart software entity figuring prominently in the write up? Amazon, the Google, or Twitter?
The idea, at least in the construct of the cited article, is that trust is important. And whom does one trust?

How do I know there’s an element of trust required to accept this fine scholarly article?

Here’s a clue:

The paper is co-authored by John T. Richards, of IBM’s T.J. Watson Research Center, Yorktown Heights, New York.

Yep, the home of the game show winner and arguably one of the few smart software systems to be put on a gurney and rolled out the door of a Houston, Texas medical facility.

But just in case the self-regulation thing doesn’t work, the scholarly experts’ findings point to “a regulatory ecosystem.”

Yep, regulations. How has that been working out over the last 20 years?

Why not ask IBM Watson?

Stephen E Arnold, March 11, 2021
