Smart Software with a Swayed Back Pony

October 1, 2017

I read “Is AI Riding a One-Trick Pony?” and felt those old riding sores again. Technology Review presents nifty new technology which is, in fact, old. Bayesian methods date from the 18th century. The MIT write up pegs Geoffrey Hinton, a beloved producer of artificial intelligence talent, as the flag bearer for the great man theory of smart software.

Dr. Hinton is a good subject for study. But the need to generate clicks and zip in the quasi-academic world of big time universities may have the publication engaged in “practical” public relations. For example, the write up praises Dr. Hinton’s method of “back propagation.” At the same time, the MIT publication explains the method behind the neural networks popular today:

you change each of the weights in the direction that best reduces the error overall. The technique is called “backpropagation” because you are “propagating” errors back (or down) through the network, starting from the output.

This makes sense. The idea is that the method allows the real world to be subject to a numerical recipe.
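
To make the recipe concrete, here is a minimal sketch of that weight-update step in Python, assuming a tiny two-layer network, invented data, and an arbitrary learning rate; it illustrates the general technique, not the article’s or Dr. Hinton’s actual code.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(8, 3))                            # eight made-up examples, three features
    y = (X.sum(axis=1, keepdims=True) > 0).astype(float)   # invented target values

    W1 = rng.normal(scale=0.5, size=(3, 4))   # input-to-hidden weights
    W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden-to-output weights
    lr = 0.5                                  # arbitrary learning rate

    for step in range(200):
        h = np.tanh(X @ W1)                       # forward pass
        out = 1.0 / (1.0 + np.exp(-(h @ W2)))     # sigmoid output
        delta_out = (out - y) * out * (1 - out)   # error signal at the output

        # Propagate the error back through the network, layer by layer.
        grad_W2 = h.T @ delta_out
        grad_W1 = X.T @ ((delta_out @ W2.T) * (1 - h ** 2))

        # Nudge each weight in the direction that reduces the overall error.
        W2 -= lr * grad_W2
        W1 -= lr * grad_W1

Each pass computes the error at the output and pushes it back through the layers, nudging every weight a little bit downhill.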

The write up states:

Neural nets can be thought of as trying to take things—images, words, recordings of someone talking, medical data—and put them into what mathematicians call a high-dimensional vector space, where the closeness or distance of the things reflects some important feature of the actual world.

Yes, reality. The way the brain works. A way to make software smart. Indeed a one trick pony which can be outfitted with a silver bridle, a groomed mane and tail, and black liquid shoe polish on its dainty hooves.
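
For readers who want the mechanics rather than the grooming, the vector space idea can be sketched in a few lines of Python. The three-dimensional vectors and the words below are invented stand-ins, and cosine similarity serves as “closeness”:

    import numpy as np

    # Invented three-dimensional "embeddings"; a real system learns many more dimensions.
    vectors = {
        "pony": np.array([0.9, 0.1, 0.2]),
        "horse": np.array([0.8, 0.2, 0.3]),
        "spreadsheet": np.array([0.1, 0.9, 0.7]),
    }

    def closeness(a, b):
        # Cosine similarity: 1.0 means the vectors point the same way.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(closeness(vectors["pony"], vectors["horse"]))         # high: similar things
    print(closeness(vectors["pony"], vectors["spreadsheet"]))   # low: dissimilar things

A real system learns the vectors from data and uses hundreds of dimensions, but the closeness arithmetic is the same.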

The sway back? A genetic weakness. A one trick pony with a sway back, however, may not be able to carry overweight kiddies to the Artificial Intelligence Restaurant.

MIT’s write up suggests there is a weakness in the method; specifically:

these “deep learning” systems are still pretty dumb, in spite of how smart they sometimes seem.

Why?

Neural nets are just thoughtless fuzzy pattern recognizers, and as useful as fuzzy pattern recognizers can be—hence the rush to integrate them into just about every kind of software—they represent, at best, a limited brand of intelligence, one that is easily fooled.

Concerning the software itself, the article points out that:

And though we’ve started to get a better handle on what kinds of changes will improve deep-learning systems, we’re still largely in the dark about how those systems work, or whether they could ever add up to something as powerful as the human mind.

There is hope too:

Essentially, it is a procedure he calls the “exploration–compression” algorithm. It gets a computer to function somewhat like a programmer who builds up a library of reusable, modular components on the way to building more and more complex programs. Without being told anything about a new domain, the computer tries to structure knowledge about it just by playing around, consolidating what it’s found, and playing around some more, the way a human child does.
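
Here is a toy sketch, in Python, of that explore-and-consolidate loop. The primitives, the random “play,” and the compression rule are invented stand-ins for illustration, not the researcher’s actual procedure:

    import random
    from collections import Counter

    library = {
        "inc": lambda x: x + 1,
        "dbl": lambda x: x * 2,
        "neg": lambda x: -x,
    }

    def explore(n_programs=500, length=4):
        # Play around: each "program" is a random sequence of names drawn
        # from the current library.
        return [tuple(random.choices(list(library), k=length)) for _ in range(n_programs)]

    def compress(programs):
        # Consolidate: find the most common two-step fragment and promote it
        # to a single named, reusable component in the library.
        pairs = Counter(p[i:i + 2] for p in programs for i in range(len(p) - 1))
        (a, b), _ = pairs.most_common(1)[0]
        f, g = library[a], library[b]
        library[a + "+" + b] = lambda x, f=f, g=g: g(f(x))

    random.seed(0)
    for _ in range(3):
        compress(explore())   # each round plays with the richer library
    print(sorted(library))    # composed components now sit beside the primitives

The point is the shape of the loop: explore, notice what keeps recurring, fold it into the library as a reusable component, and explore again with richer parts.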

We have a braided mane and maybe a combed tail.

But what about that swayed back, the genetic weakness which leads to a crippling injury when the poor pony is asked to haul a Facebook or Google sized child around the ring? What happens if more efficient ways to create training data, replete with accurate metadata and tags for human things like sentiment and context awareness, become affordable, fast, and easy?

My thought is that it may be possible to do a bit of genetic engineering and make the next pony healthier and less expensive to maintain.

Stephen E Arnold, October 1, 2017
