Top Papers in Data Mining: Some Concern about Possibly Flawed Outputs
January 12, 2015
If you are a fan of “knowledge,” you probably follow the information provided by www.KDNuggets.com. I read “Research Leaders on Data Science and Big Data Key Trends, Top Papers.” The information is quite interesting. I did note that the article kicked off with this statement:
As for the papers, we found that many researchers were so busy that they did not really have the time to read many papers by others. Of course, top researchers learn about works of others from personal interactions, including conferences and meetings, but we hope that professors have enough students who do read the papers and summarize the important ones for them!
Okay, everyone is really busy.
Among the 13 experts cited, I noted two papers that seemed to call attention to the issue of accuracy. These were:
“Preventing False Discovery in Interactive Data Analysis is Hard,” Moritz Hardt and Jonathan Ullman
“Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images,” Anh Nguyen, Jason Yosinski, Jeff Clune.
A related paper noted in the article is “Intriguing Properties of Neural Networks,” by Christian Szegedy, et al. The KDNuggets comment states:
It found that for every correctly classified image, one can generate an “adversarial”, visually indistinguishable image that will be misclassified. This suggests potential deep flaws in all neural networks, including possibly a human brain.
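To make the “adversarial image” idea concrete, here is a minimal sketch of the gradient-sign approach commonly used to craft such perturbations (it is not the exact optimization used by Szegedy et al.; the function name, the `epsilon` value, and the assumption of a trained PyTorch classifier are all illustrative):

```python
# Minimal sketch: nudge an image so a classifier is likely to misclassify it,
# while the change stays nearly invisible to a human. Assumes a trained
# PyTorch model `model`, an input tensor `image`, and its true `label`.
import torch
import torch.nn.functional as F

def adversarial_example(model, image, label, epsilon=0.01):
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel a tiny amount in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()
```

The striking part of the research is that such a tiny, structured perturbation is enough to flip a “high confidence” prediction.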
My takeaway is that automation is coming down the pike. Accuracy could get hit by a speeding output.
Stephen E Arnold, January 12, 2015