AI Speed Bumps Needed

June 6, 2018

The most far-reaching problem with AI may be machine learning’s potential to pick up, and act on, the wrong lessons. Technology Review draws our attention to a new service that tests algorithms for bias in, “This Company Audits Algorithms to See How Biased They Are.” The small company, O’Neil Risk Consulting and Algorithmic Auditing (ORCAA), was founded by Cathy O’Neil, the mathematician and social scientist behind the book Weapons of Math Destruction. In analyzing an algorithm for fairness, the company considers many factors, from the programmers themselves to the data the system generates. O’Neil offers these assessments as a way for companies to certify their algorithms bias-free, a certification she suggests makes for a strong marketing tool.
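For a concrete sense of what one narrow slice of such an audit might measure, here is a minimal sketch in Python that computes a classifier’s demographic parity difference, the gap in favorable-outcome rates between groups. This is an illustration of the general idea only, not ORCAA’s methodology; the group labels, sample data, and the 0.1 tolerance below are all hypothetical.

    # Minimal sketch of one bias check: demographic parity difference.
    # Not ORCAA's methodology; data, labels, and threshold are hypothetical.

    def demographic_parity_difference(decisions, groups):
        """Return the gap in positive-decision rates between groups.

        decisions: list of 0/1 outcomes produced by the algorithm under audit
        groups:    parallel list of group labels (e.g., "A" or "B")
        """
        rates = {}
        for label in set(groups):
            outcomes = [d for d, g in zip(decisions, groups) if g == label]
            rates[label] = sum(outcomes) / len(outcomes)
        values = sorted(rates.values())
        return values[-1] - values[0]

    # Hypothetical audit data: 1 = favorable decision (e.g., loan approved).
    decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

    gap = demographic_parity_difference(decisions, groups)
    print(f"Demographic parity difference: {gap:.2f}")
    if gap > 0.1:  # illustrative tolerance, not an industry standard
        print("Flag for human review: favorable outcomes are unevenly distributed.")

A single number like this is, of course, only a starting point; as O’Neil’s approach suggests, a real audit also has to look at where the data came from and who built the system.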

Meanwhile, in its post “Math Can’t Solve Everything: Questions We Need to Be Asking Before Deciding an Algorithm is the Answer,” the Electronic Frontier Foundation warns that we are already becoming too reliant on AI. In introducing their list, Staff Attorney Jamie Lee Williams and Product Manager Lena Gunn emphasize:

“Across the globe, algorithms are quietly but increasingly being relied upon to make important decisions that impact our lives. This includes determining the number of hours of in-home medical care patients will receive, whether a child is so at risk that child protective services should investigate, if a teacher adds value to a classroom or should be fired, and whether or not someone should continue receiving welfare benefits. The use of algorithmic decision-making is typically well-intentioned, but it can result in serious unintended consequences. In the hype of trying to figure out if and how they can use an algorithm, organizations often skip over one of the most important questions: will the introduction of the algorithm reduce or reinforce inequity in the system?”

The article urges organizations to weigh these five questions: Will this algorithm influence decisions with the potential to negatively impact people’s lives? Can the available data actually lead to a good outcome? Is the algorithm fair? How will the results (really) be used by humans? And will people affected by these decisions have any influence over the system? For each question, the post explains why and how to ask it, complete with examples of AI bias that have already occurred. It all comes down to this, as Williams and Gunn write: “We must not use algorithms to avoid making difficult policy decisions or to shirk our responsibility to care for one another.”

Cynthia Murrell, June 6, 2018

