Auditing Algorithms: A Semi-Tough Task

October 22, 2021

Many years ago, I did a project for a large outfit. The goal was to examine a project and figure out why it was a flop. I assembled an okay team, which beavered away. The conclusion: a number of small things had gone wrong. Each added friction to what on the surface seemed a doable project, and the cumulative friction took the effort nowhere.

I thought about this after I read “Twitter’s Own Research Shows That It’s a Megaphone for the Right. But It’s Complicated.”

I circled this statement from the article:

“We can see that it is happening. We are not entirely sure why it is happening. To be clear, some of it could be user-driven, people’s actions on the platform, we are not sure what it is.”

Now back to failure. Humans expect a specific construct to work in a certain way. When it doesn’t, humans either embrace root cause analysis or just shrug their shoulders and move on.

Several questions:

  • If those closest to a numerical recipe are not sure what’s causing the unexpected outcome, how will third-party algorithm auditors figure out what is happening?
  • Engineering failures, like using a material which cannot tolerate a particular amount of stress, are relatively easy to figure out. Social media “smart” algorithms may be a more difficult challenge. What tools are available for this type of engineering failure analysis? Do they work, or are they unable to look at a result and pinpoint one or more points of inappropriate performance?
  • When humans interact with complex algorithmic systems via social media, do researchers have meta-models capable of identifying the causes of failures or performance quirks that result from tiny operations within the collective system?

My hunch is that something new exists to be studied. Was Timnit Gebru, the former Google researcher, on the right track?

Stephen E Arnold, October 22, 2021
