Explaining Deep Learning

June 30, 2018

I read an interesting analysis of one approach to deep learning: “The Matrix Calculus You Need For Deep Learning.” It is a useful write up and one worth reading. One statement in particular struck me as quite interesting.

Here’s the statement I highlighted:

How does the product xy change when we wiggle the variables?

The operative words are:

  • Change
  • Wiggle
  • Variables
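To make the “wiggle” idea concrete, here is a minimal sketch of my own (not code from the write up): for the product xy, wiggling x changes the result in proportion to y, and wiggling y changes it in proportion to x. A tiny finite difference check confirms the calculus.

# Minimal sketch, assuming f(x, y) = x * y as in the quoted question.
# Calculus says df/dx = y and df/dy = x; we check by wiggling each variable a little.

def product(x, y):
    return x * y

def numerical_partials(f, x, y, h=1e-6):
    """Approximate partial derivatives by wiggling each variable by a small amount h."""
    df_dx = (f(x + h, y) - f(x, y)) / h   # should be close to y
    df_dy = (f(x, y + h) - f(x, y)) / h   # should be close to x
    return df_dx, df_dy

if __name__ == "__main__":
    x, y = 3.0, 5.0
    print(numerical_partials(product, x, y))  # prints values close to (5.0, 3.0)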

My thoughts ran along this rather simplistic line:

If humans wiggle variables to induce change, then automated systems need a cowpoke to ride herd.

If software operations wiggle variables to induce change, then inputs can have an impact.

Black box methods are difficult to figure out because humans are not keen on revealing what the cowpokes do to get the system operating in an acceptable manner. If the black box is automatic and really smart, humans have a tough time figuring out how occurrence A generated action B.

Figuring out whether algorithms and numerical recipes are biased may prove to be a challenge. Explaining reveals how they work. Not explaining may reveal that smart methods are only as smart as the cowpokes who herd the numbers.

Stephen E Arnold, June 30, 2018
