Explainable AI: Who Will Cooperate to Make This Feasible?
December 21, 2018
I read “Capital One AI Chief Sees Path to Explainable AI.” Capital One is the company that runs the “what’s in your wallet?” campaign. The company has also made headlines like this one: “Compliance Weaknesses Cost Capital One $100M.”
The write up articulates a point of view that, I have to admit, Beyond Search had not considered possible. The idea is that the “black boxes” of artificial intelligence can be made more “interpretable.” Yes, I suppose, like the discovery of the company’s compliance issues.
Here’s a quote attributed to Capital One’s smart software guru:
“A good deep learning approach could give us more comfort that we know what’s happening in the system than having 1000 of these human-created rules, created over 20 years.”
“Good,” of course, is relative.
My view is that developers and some organizations do not want anyone—including certain employees—to know how some numerical procedures operate. At Amazon, Google, and Microsoft, senior managers have tried to partition information so that employees don’t go on protest marches and write critical tweets when applying smart software to war fighting.
Therefore, one can talk about transparency. One can argue that black boxes will become more open to scrutiny.
My hunch is that the idea is interesting but unlikely to take hold, particularly in the organizations which have the most to lose if what certain functions do and how they do it were revealed.
What about home loan and credit policies based on numerical analysis? How much visibility do financial institutions provide when approving or rejecting a credit limit increase, a home loan, or some other credit-centric financial transaction?
What happens when smart software makes these decisions? Perhaps a look at “Weapons of Math Destruction” would be helpful.
Then consider secrecy as a business advantage. Just a couple of ideas to ponder when sipping spiked eggnog.
Stephen E Arnold, December 21, 2018