An Algorithm for Fairness and Bias Checking
July 16, 2018
I like the idea of a meta algorithm. This particular meta algorithm is described in “New Algorithm Limits Bias in Machine Learning.” The write up explains what those working with smart software have known for, what, decades? A century? Here’s the explanation of what happens when algorithms are slapped together:
But researchers have found that machine learning can produce unfair determinations in certain contexts, such as hiring someone for a job. For example, if the data plugged into the algorithm suggest men are more productive than women, the machine is likely to “learn” that difference and favor male candidates over female ones, missing the bias of the input. And managers may fail to detect the machine’s discrimination, thinking that an automated decision is an inherently neutral one, resulting in unfair hiring practices.
If you want to see how bias works, just run a query for “papa john pizza.” Google dutifully reports via its smart algorithm hits about Papa John’s founder getting evicted from his office, Papa John’s non-admission of racial bias, and colleges cutting ties to Papa John’s founder. Google also provides locations and a link to the Twitter account. That was the result set displayed for me this morning (July 16, 2018) at 9:40 am US Eastern.
The only problem with my query “papa john pizza” is that I wanted the copycat recipe at this link. Google’s algorithm made certain that I would know about the alleged dust-up within the pizza empire and that I could navigate to a store in Louisville. The smart software made it quite difficult for me to locate the knockoff information. Sure, I could have provided Google with more clues to what I wanted: “Six Sisters,” the word “copycat,” the word “recipe,” and the word “ingredient.” But that’s what smart software is supposed to render obsolete. Boolean has no role in what algorithms expose to users. That’s why results are often interesting. That’s why smart software delivers off-kilter results. The intent is to be useful. Often smart software is anything but.
Are the Google results biased? If I were Papa John, I might take umbrage at the three headlines about bias.
Algorithms, if the write up is correct, will ameliorate this type of smart software dysfunctionality.
The article explains:
In a new paper published in the Proceedings of the 35th Conference on Machine Learning, SFI Postdoctoral Fellow Hajime Shimao and Junpei Komiyama, a research associate at the University of Tokyo, offer a way to ensure fairness in machine learning. They’ve devised an algorithm that imposes a fairness constraint that prevents bias.
One of the developers is quoted as saying:
“So say the credit card approval rate of black and white [customers] cannot differ more than 20 percent. With this kind of constraint, our algorithm can take that and give the best prediction of satisfying the constraint,” Shimao says. “If you want the difference of 20 percent, tell that to our machine, and our machine can satisfy that constraint.”
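The quoted constraint is easy to picture in code. Below is a minimal sketch of that kind of approval-rate check, not the Shimao-Komiyama algorithm itself; the function name, data, and group labels are invented for illustration. Their method goes further, producing the best prediction that satisfies the constraint rather than merely auditing predictions after the fact.

```python
import numpy as np

def satisfies_parity(approvals, groups, max_gap=0.20):
    """Demographic-parity style check: the approval rates of the two
    groups may not differ by more than max_gap."""
    rate_a = approvals[groups == 0].mean()
    rate_b = approvals[groups == 1].mean()
    return abs(rate_a - rate_b) <= max_gap

# Hypothetical model output: 1 = approve, 0 = deny
approvals = np.array([1, 0, 1, 1, 0, 1, 0, 0])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # hypothetical group labels

# Approval rates are 0.75 vs 0.25; the 0.50 gap breaks the 20 percent limit
print(satisfies_parity(approvals, groups))  # False
```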
Just one question: What if a system incorporates two or more fairness algorithms?
Perhaps a meta fairness algorithm will herd the wandering sheep? Georg Cantor was troubled by this sort of infinity-of-infinities issue.
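For what it is worth, a naive “meta fairness” layer could simply demand that every constraint hold at once; nothing guarantees a predictor exists that satisfies them all, which is rather the point. The checks and thresholds below are invented for illustration:

```python
import numpy as np

def parity_gap_ok(approvals, groups, max_gap=0.20):
    """Approval rates across groups may differ by at most max_gap."""
    return abs(approvals[groups == 0].mean()
               - approvals[groups == 1].mean()) <= max_gap

def min_rate_ok(approvals, groups, floor=0.50):
    """Every group must see at least a floor approval rate."""
    return all(approvals[groups == g].mean() >= floor for g in (0, 1))

def satisfies_all(approvals, groups, checks):
    """The naive meta check: pass only if every fairness check passes."""
    return all(check(approvals, groups) for check in checks)

approvals = np.array([1, 1, 1, 0, 1, 0, 1, 0])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(parity_gap_ok(approvals, groups))  # gap 0.25 > 0.20 -> False
print(min_rate_ok(approvals, groups))    # rates 0.75 and 0.50 -> True
print(satisfies_all(approvals, groups, [parity_gap_ok, min_rate_ok]))  # False
```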
Fairness may be in the eye of the beholder. The statue of justice wears a blindfold, not old-people magnifiers. Algorithms? You decide. Why not order a pizza, or make your own clone of a Papa John’s pizza if you can find the recipe? Pizza and algorithms to verify algorithms. Sounds tasty.
If I think about algorithms identifying fake news, I may need to order maximum strength Pepcid and receive many, many smart advertisements from Amazon.
Stephen E Arnold, July 16, 2018