AI Is Not the Only System That Hallucinates

April 7, 2023

Note: This essay is the work of a real and still-alive dinobaby. No smart software involved, just a dumb humanoid.

I personally love it when software goes off the deep end. From the early days of “Fatal Error” to the more interesting outputs of a black box AI system, the digital comedy road show delights me.

Reading “The Call to Halt ‘Dangerous’ AI Research Ignores a Simple Truth” reminds me that it is not just software that is subject to synapse wonkiness. Consider this statement from the Wired Magazine story:

… there is no magic button that anyone can press that would halt “dangerous” AI research while allowing only the “safe” kind.

Yep, no magic button. No kidding. Decades of experience with US big technology companies’ behavior make clear exactly what trajectory new methods will follow.

I love this statement, from Wired Magazine no less:

Instead of halting research, we need to improve transparency and accountability while developing guidelines around the deployment of AI systems. Policy, research, and user-led initiatives along these lines have existed for decades in different sectors, and we already have concrete proposals to work with to address the present risks of AI.

Wired was one of technology’s cheerleaders when it fired up its unreadable pink text with orange headlines in 1993, as I recall. The cheerleading was loud and repetitive.

I would suggest that “simple truth” is in short supply. In my experience, big, technology-savvy companies will do whatever they can to corner a market and generate as much money as possible. Lock-in, monopolistic behavior, collusion, and other useful tools are available.

Nice try, Wired. Transparency is good to consider, but big outfits are not in the let-the-sunshine-in game.

Stephen E Arnold, April 7, 2023
