The Case for Algorithmic Equity

September 20, 2016

We know that AI algorithms are skewed by the biases of both their creators and, depending on the application, their users. Data scientist and activist Cathy O’Neil addresses the broad consequences for society in her book, Weapons of Math Destruction. Time covers her views in its article, “This Mathematician Says Big Data Is Causing a ‘Silent Financial Crisis’.” O’Neil studied mathematics at Harvard, worked as a quantitative analyst at a hedge fund, and did data science for a targeted-advertising startup. It is fair to say she knows what she is talking about.

More and more businesses and organizations rely on algorithms to make decisions that have big impacts on people’s lives: choices about employment, financial matters, scholarship awards, and where to deploy police officers, for example. Yet the processes are shrouded in secrecy, and lawmakers are nowhere close to being on top of the issue. There is currently no way to ensure these decisions are anything approaching fair. In fact, the algorithms can create a sort of feedback loop of disadvantage. Reporter Rana Foroohar writes:

Using her deep technical understanding of modeling, she shows how the algorithms used to, say, rank teacher performance are based on exactly the sort of shallow and volatile type of data sets that informed those faulty mortgage models in the run up to 2008. Her work makes particularly disturbing points about how being on the wrong side of an algorithmic decision can snowball in incredibly destructive ways—a young black man, for example, who lives in an area targeted by crime fighting algorithms that add more police to his neighborhood because of higher violent crime rates will necessarily be more likely to be targeted for any petty violation, which adds to a digital profile that could subsequently limit his credit, his job prospects, and so on. Yet neighborhoods more likely to commit white collar crime aren’t targeted in this way.

Yes, unsurprisingly, it is the underprivileged who bear the brunt of algorithmic harm; the above is just one example. The write-up continues:

Indeed, O’Neil writes that WMDs [Weapons of Math Destruction] punish the poor especially, since ‘they are engineered to evaluate large numbers of people. They specialize in bulk. They are cheap. That’s part of their appeal.’ Whereas the poor engage more with faceless educators and employers, ‘the wealthy, by contrast, often benefit from personal input. A white-shoe law firm or an exclusive prep school will lean far more on recommendations and face-to-face interviews than a fast-food chain or a cash-strapped urban school district. The privileged… are processed more by people, the masses by machines.’

So, algorithms add to the disparity between how the wealthy and the poor experience life. Compounding the problem, algorithms also allow the wealthy to isolate themselves online as well as in real life, through curated news and advertising that make it ever easier to deny that poverty is even a problem. See the article for a more thorough discussion.
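The feedback loop Foroohar describes is easy to demonstrate. The toy simulation below is our own sketch, not O’Neil’s model: it assumes two neighborhoods with identical underlying offense rates, an algorithm that sends patrols wherever past records are highest, and data that grow only where patrols go. Every name and number in it is hypothetical.

    import random

    random.seed(0)

    TRUE_OFFENSE_RATE = 0.05       # identical in both neighborhoods
    recorded = {"A": 12, "B": 10}  # small initial gap in historical records

    for day in range(365):
        # The "algorithm": send the patrol wherever past records are highest.
        target = max(recorded, key=recorded.get)
        # Offenses occur at the same rate everywhere, but one enters the
        # data set only if a patrol is present to record it.
        if random.random() < TRUE_OFFENSE_RATE:
            recorded[target] += 1

    print(recorded)  # the gap between A and B only ever widens

Because records accumulate only where patrols are sent, the initial two-incident gap is self-confirming: neighborhood B never generates the data that could correct the model, no matter how similar the two neighborhoods actually are.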

What does O’Neil suggest we do about this? First, she proposes a “Hippocratic Oath for mathematicians.” She also joins the calls for much more thorough regulation of the AI field and for updating existing civil-rights laws to cover algorithm-based decisions. Such measures will require the cooperation of legislators, who, as a group, are hardly known for their technical understanding. It is up to those of us who do comprehend the issues to inform them that action must be taken. Sooner rather than later, please.

Cynthia Murrell, September 20, 2016
Sponsored by ArnoldIT.com, publisher of the CyberOSINT monograph
There is a Louisville, Kentucky Hidden Web/Dark Web meet up on September 27, 2016.
Information is at this link: https://www.meetup.com/Louisville-Hidden-Dark-Web-Meetup/events/233599645/


Comments

One Response to “The Case for Algorithmic Equity”

  1. NormN on September 22nd, 2016 1:20 pm

    I’ve read several of your informative articles through the Public Intelligence Blog edited by Robert Steele.

    As long as we don’t elevate any particular algorithm to god-status I’m confident we’ll be OK.

    First, let me qualify myself: I have yet to study any math behind even a simple algorithm.

    I’m going to hypothesize that algorithms are to computers what behavior is to humans. For example, in racism, it isn’t the stereotypes (algorithms?) that create the biases that cause social problems; it is the convictions that do. By definition, stereotypes are flexible, changing, tolerable, even useful imperfections in evaluating situations and people.

    When we get committed or comfortable long term with a stereotype, it can become an inflexible, possibly pernicious, conviction.

    In your example of a high-crime, poverty-stricken area, as unfair as the applied algorithm may be, that environment will change. Then it becomes a question of how frequently the data are updated and whether new algorithms are used, and whether the result is a flexible stereotype or an inflexible one.

    In Psycho-Cybernetics, the author points out that cybernetics recognizes mistakes as a natural and normal part of life, as is the continuous correcting of those errors. To hold the conviction that mistakes are bad, evil things that must be avoided at almost any cost is, I agree, what causes a lot of unnatural problems in civilized society.
