Algorithmic Recommendations and Real Journalists: A Volatile Combination

September 22, 2017

I love the excitement everyone has for mathy solutions to certain problems. Sure, the math works. What is tough for some to grasp is that a probability is different from a binary fact, like whether one has driven one’s automobile into a mine drainage ditch. Fancy math that figures out who likes what, via clustering or by mixing a person’s previous choices with what “similar” people purchased, produces a different kind of answer. The car is in the slime: yes or no. The recommendation is correct: well, somewhere between 70 and 85 percent of the time.

That’s a meaningful difference.
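To make the distinction concrete, the “similar people” math described above can be sketched as user-based collaborative filtering with cosine similarity. The purchase matrix, names, and scores below are hypothetical illustrations, not any vendor’s actual recipe:

```python
# A minimal sketch of "people like you bought X" recommendation math:
# user-based collaborative filtering with cosine similarity.
# All data here is made up for illustration.
import math

# Rows: users; columns: items (1 = purchased, 0 = not purchased).
purchases = {
    "alice": [1, 1, 0, 0],
    "bob":   [1, 1, 1, 0],
    "carol": [0, 0, 1, 1],
}

def cosine(u, v):
    """Cosine similarity between two purchase vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(user, data):
    """Rank items the user has not bought by similarity-weighted
    purchases of other users. The result is a guess with a score,
    not a yes/no fact about the world."""
    me = data[user]
    scores = [0.0] * len(me)
    for other, vec in data.items():
        if other == user:
            continue
        sim = cosine(me, vec)
        for i, bought in enumerate(vec):
            if bought and not me[i]:
                scores[i] += sim
    return sorted((i for i, s in enumerate(scores) if s > 0),
                  key=lambda i: -scores[i])

print(recommend("alice", purchases))  # → [2]: Bob is similar, Bob bought item 2
```

Note that the output is a ranked list of scored guesses. Nothing in the arithmetic checks whether the recommendation corresponds to reality, which is exactly the point.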

I thought about the “car in the slime” example when I read “Anatomy of a Moral Panic”. The write up states:

The idea that these ball bearings are being sold for shrapnel is a reporter’s fantasy. There is no conceivable world in which enough bomb-making equipment is being sold on Amazon to train an algorithm to make this recommendation.

Excellent point.

However, the issue is that many people, not just “real” journalists, overlook the fact that a probability is not the same as the car in the slime. As smart software becomes the lazy person’s way to get information, it is useful to recall that some individuals confuse the outputs of a statistical numerical recipe with reality.

I find this larger issue a bit more frightening than the fact that recommendation engines spit out guesses about what is similar, or that some humans misunderstand those guesses.

Stephen E Arnold, September 22, 2017

