Smart Software: An Annoying Flaw Will Not Go Away

December 22, 2016

“Machines May Never Master the Distinctly Human Elements of Language” captures one of the annoying flaws in smart software. Machines are not human—at least not yet. The write up explains that “intelligence is mysterious.” Okay, big surprise for some of the Sillycon Valley crowd.

The larger question is, “Why are some folks skeptical about smart software and its adherents’ claims?” Part of the reason is that publications have to show some skepticism after cheerleading. Another reason is that marketing presents a vision of reality which often runs counter to one’s experience. Try using that voice stuff in a noisy subway car. How’s that working out?

The write up caught my attention with this statement from the Google, one of the leaders in smart software’s ability to translate human utterances:

“Machine translation is by no means solved. GNMT can still make significant errors that a human translator would never make, like dropping words and mistranslating proper names or rare terms, and translating sentences in isolation rather than considering the context of the paragraph or page.”

The write up quotes a Stanford wizard as saying:

She [wizard Li] isn’t convinced that the gap between human and machine intelligence can be bridged with the neural networks in development now, not when it comes to language. Li points out that even young children don’t need visual cues to imagine a dog on a skateboard or to discuss one, unlike machines.

My hunch is that quite a few people know that smart software works in some use cases and not in others. The challenge is to get those with vested interests and the marketing millennials to stick with “as is” without confusing the “to be” with what can be done with available tools. I am all in on research computing, but the assertions of some of the cheerleaders spell S-I-L-L-Y. Louder now.

Stephen E Arnold, December 22, 2016

