False Positives: The New Normal

January 1, 2019

This is why so many people are wary of handing too much power to algorithms. TechDirt reports, “School Security Software Decided Innocent Parent Is Actually a Registered Sex Offender.” That said, it seems some common sense on the part of the humans involved would have prevented the unwarranted humiliation. The mismatch took place at an Aurora, Colorado, middle school event, where parent Larry Mitchell presumably just wanted to support his son. When office staff scanned his license, however, the Raptor system flagged him as a potential offender. Reporter Tim Cushing writes:

“Not only did these stats [exact name and date of birth] not match, but the photos of registered sex offenders with the same name looked nothing like Larry Mitchell. The journalists covering the story ran Mitchell’s info through the same databases — including Mitchell’s birth name (he was adopted) — and found zero matches. What it did find was a 62-year-old white sex offender who also sported the alias ‘Jesus Christ,’ and a black man roughly the same age as the Mitchell, who is white. School administration has little to say about this botched security effort, other than policies and protocols were followed. But if so, school personnel need better training… or maybe at least an eye check. Raptor, which provides the security system used to misidentify Mitchell, says photo-matching is a key step in the vetting process….

We also noted:

“Even if you move past the glaring mismatch in photos (the photos returned in the Sentinel’s search of Raptor’s system are embedded in the article), neither the school nor Raptor can explain how Raptor’s system returned results that can’t be duplicated by journalists.”
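
Purely as an illustration of the failure mode, here is a hypothetical sketch of the kind of check involved. It is not Raptor’s actual logic; the record fields, the registry entries, and the photo-confirmation step are assumptions. It simply shows why flagging on a name match alone, without the date-of-birth and photo checks, produces exactly this sort of false positive.

```python
# Hypothetical visitor-screening check, invented for illustration only.
from dataclasses import dataclass

@dataclass
class Record:
    name: str
    dob: str        # date of birth from the scanned license or the registry
    photo_id: str   # stand-in for an actual photo comparison

def flag_name_only(visitor: Record, registry: list[Record]) -> bool:
    """The failure mode: a name match alone raises a flag."""
    return any(r.name == visitor.name for r in registry)

def flag_full_match(visitor: Record, registry: list[Record]) -> bool:
    """Name, date of birth, and a photo check must all agree."""
    return any(
        r.name == visitor.name
        and r.dob == visitor.dob
        and r.photo_id == visitor.photo_id   # in practice, a human compares the photos
        for r in registry
    )

registry = [Record("Larry Mitchell", "1956-03-02", "photo_A")]   # invented data
visitor  = Record("Larry Mitchell", "1970-07-15", "photo_B")     # invented data

print(flag_name_only(visitor, registry))   # True: the embarrassing false positive
print(flag_full_match(visitor, registry))  # False: the mismatch is caught
```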

This looks like a version of the classic PEBCAK error (problem exists between chair and keyboard), and such mistakes will only increase as these verification systems continue to be implemented at schools and other facilities across the country. Cushing rightly points to this problem as “an indictment of the security-over-sanity thinking.” Raptor, a private company, is happy to tout its great success at keeping registered offenders out of schools, but it does not reveal how often its false positives have ruined an innocent family’s evening, or worse. How much control is our society willing to hand over to AIs (and those who program them)?

Cynthia Murrell, January 1, 2019

Will Algorithms Become a Dying Language?

December 30, 2018

It may sound insane, considering how much of our daily lives revolve around algorithms. From your work, to your online shopping, to the maps that guide you on vacation, we depend on this code. However, some engineers fear older algorithms will be lost to the sands of time and future generations will not be able to learn from them. Thankfully, a solution has arrived in the form of The Algorithm Archive.

According to its mission statement:

“The Arcane Algorithm Archive is a collaborative effort to create a guide for all important algorithms in all languages. This goal is obviously too ambitious for a book of any size, but it is a great project to learn from and work on and will hopefully become an incredible resource for programmers in the future.”

A project like this is important. Perhaps the organizations with the most to learn from this long evolution of algorithms are public government agencies. Some writers think many of these agencies have no idea what is in their algorithms, let alone how much those algorithms have to do with major policy decisions. Hindsight is truly 20/20.

Patrick Roland, December 30, 2018

Who Is a Low Risk Hire?

November 21, 2018

Last week, a person who did some contract work for me a year ago asked me if I would provide a reference. I agreed. I assumed that a caring, thoughtful human resources professional would speak with me on the telephone. Wrong. I received a text message asking me if I would answer some questions. Get this. Each text message would contain a question about the person who sought a reference. After I hit send, I would receive another text message.

Wrong.

I was then sent a link to an online form that assured me my information was confidential. HTTPS was not part of this outfit’s game plan. I worked through a form, providing scores from one to seven about the person. The fact that I hired this person to perform a specific job for me was evidence that the individual could be trusted. I am not making chopped liver or cranking out greeting cards. We produce training information for law enforcement and intelligence professionals.

I worked through the questions, which struck me as more concerned with appearing interested in the individual than with actually obtaining concrete information about the person. Here’s an example of what the online test reveals:

[Image: a sample question from the online form, asking for a one-to-seven rating of the contractor’s “adaptability”]

Yeah, pretty much useless. I am not sure what “adaptability” means. I tell contractors what I want. The successful contractor does that task and gets paid. A contractor who does not gets cut out of the pool. This means in politically incorrect speak: Gets fired.

I read “Public Attitudes Toward Computer Algorithms” a couple of days after going through this oddball way to get information about a person doing law enforcement and intelligence-related work. The write up makes clear that other people are not keen on the use of opaque methods to figure out if a person can do good work and be trusted.

Well, gentle reader, get used to this.

Human resources professionals want to cover their precious mortgages, make a car payment, or buy a new gizmo at the Amazon online store. They are not eager to be responsible for screening individuals and figuring out what questions to ask a person like me. For good reason: I am not sure I would spend more than two minutes on the phone with an actual HR person. For the last 30 years, I have worked as an independent consultant. My interactions with HR are limited to suggesting that the individual stay away from me. Fill out forms or something. Just leave me alone, or you will be talking to individuals whom I pay to make you go away. I have a Mensa paralegal who can tie almost anyone in knots.

Several observations:

  1. Algorithms for hiring are a big, big thing. Why? Tail covering and document trails that say, “See, I did everything required by applicable regulations.” Forget judgment.
  2. The online angle is cheaper than having an actual old fashioned HR department. Outsource benefit reduction. Outsource candidate screening. Heck, outsource the outsourcing.
  3. No one wants to be responsible for anything. Look at the high school science club management methods at Facebook. The founder is at war. Former employees explain that no one gave direction. Yada yada.
  4. The use of algorithms presumably leads to efficiencies; that is, lower costs, better, faster, cheaper, MBA and bean counter fits of joy.

Just as Apple’s Tim Cook sees nothing objectionable about taking Google’s money while Apple talks up its privacy and security commitment, algorithms make everything, including HR, much better.

Net net: I am glad I am old and officially cranking along at 75, not a hapless 22-year-old trying to get a job and do a good job at a zippy de doo dah company.

Stephen E Arnold, November 21, 2018

Amazon Rekognition: Great but…

November 9, 2018

I have been following the Amazon response to employee demands to cut off the US government. Put that facial recognition technology on “ice.” The issue is an intriguing one; for example, Rekognition plugs into DeepLens. DeepLens connects with SageMaker. The construct allows some interesting policeware functions. Ah, you didn’t know that? Some info is available if you view the October 30 and November 6, 2018, DarkCyber. Want more info? Write benkent2020 at yahoo dot com.

How realistic is 99 percent accuracy? Pretty realistic when one has one image and a bounded data set against which to compare a single image of adequate resolution and sharpness.

What caught my attention was the “real” news in “Amazon Told Employees It Would Continue to Sell Facial Recognition Software to Law Enforcement.” I am less concerned about the sales to the US government. I was drawn to these verbal perception shifters:

  • under fire. [Amazon is taking flak from its employees who don’t want Amazon technology used by LE and similar services.]
  • track human beings [The assumption is tracking is bad until the bad actor tracked is trying to kidnap your child, then tracking is wonderful. This is the worst type of situational reasoning.]
  • send them back into potentially dangerous environments overseas. [Are Central and South America overseas, gentle reader?]

These are hot buttons.

But I circled in pink this phrase:

…Rekognition is research proving the system is deeply flawed, both in terms of accuracy and regarding inherent racial bias.

Well, what does one make of the statement that Rekognition is powerful but has fatal flaws?

Want proof that Rekognition is something more closely associated with Big Lots than Amazon Prime? The write up states:

The American Civil Liberties Union tested Rekognition over the summer and found that the system falsely identified 28 members of Congress from a database of 25,000 mug shots. (Amazon pushed back against the ACLU’s findings in its study, with Matt Wood, its general manager of deep learning and AI, saying in a blog post back in July that the data from its test with the Rekognition API was generated with an 80 percent confidence rate, far below the 99 percent confidence rate it recommends for law enforcement matches.)

Yeah, 99 percent confidence. Think about that. Pretty reasonable, right? Unfortunately, 99 percent is like believing in the tooth fairy, at least in terms of a US government spec or Statement of Work. Reality for the vast majority of policeware systems is in the 75 to 85 percent range. Pretty good in my book because these are achievable accuracy percentages. The 99 percent stuff is window dressing and will be for years to come.
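
For readers who want to see where the 80 versus 99 percent figures live, here is a minimal sketch using the public Rekognition CompareFaces API via boto3. It is illustrative only; the image file names are placeholders, and the two thresholds simply echo the numbers quoted above, the 80 percent setting the ACLU reportedly used and the 99 percent setting Amazon says it recommends for law enforcement matches.

```python
# Minimal sketch of how the similarity threshold changes what counts as a
# "match" when calling Amazon Rekognition's CompareFaces API via boto3.
# Image paths are placeholders, not real data.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

def match_faces(source_path: str, target_path: str, threshold: float):
    """Return face matches at or above the given similarity threshold."""
    with open(source_path, "rb") as src, open(target_path, "rb") as tgt:
        response = rekognition.compare_faces(
            SourceImage={"Bytes": src.read()},
            TargetImage={"Bytes": tgt.read()},
            SimilarityThreshold=threshold,  # 80 in the ACLU test, 99 recommended for LE
        )
    return response["FaceMatches"]

# With an 80 percent threshold, more candidate faces clear the bar, which is
# how false positives such as the ACLU's 28 congressional "matches" arise.
loose_matches = match_faces("probe.jpg", "mugshot.jpg", threshold=80)
strict_matches = match_faces("probe.jpg", "mugshot.jpg", threshold=99)
print(len(loose_matches), len(strict_matches))
```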

Also, Amazon, the Verge points out, is not going to let folks tinker with the Rekognition system to determine how accurate it really is. I learned:

The company has also declined to participate in a comprehensive study of algorithmic bias run by the National Institute of Standards and Technology that seeks to identify when racial and gender bias may be influencing a facial recognition algorithm’s error rate.

Yep, how about those TREC accuracy reports?

My take on this write up is that Amazon is now in the sights of the “real” journalists.

Perhaps the Verge would like Amazon to pull out of the JEDI procurement?

Great idea for some folks.

Perhaps the Verge will dig into the other components of Rekognition and then plot the improvements in accuracy when certain types of data sets are used in the analysis.

Facial recognition is not the whole cloth. Rekognition is one technology thread, and it needs a context that moves beyond charged language and accuracy rates that are in line with those of other advanced systems.

Amazon’s strength is not facial recognition. The company has assembled a policeware construct. That’s news.

Stephen E Arnold, November 9, 2018

Analytics: From Predictions to Prescriptions

October 19, 2018

I read an interesting essay originating at SAP. The article’s title: “The Path from Predictive to Prescriptive Analytics.” The idea is that outputs from a system can be used to understand data. Outputs can also be used to make “predictions”; that is, guesses or bets on likely outcomes in the future. Prescriptive analytics means that the system tells users what to do or wires actions directly into an output. The output can be read by a human, but I think the key use case will be taking the prescriptive outputs and feeding them into other software systems. In short, the system decides and does. No humans really need to be involved.
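
A minimal sketch, not SAP’s software, of the distinction as I read it: the predictive step produces a score a human can look at; the prescriptive step turns that score into an action and hands it straight to another system. The model, the SKU, and the execution step below are invented for illustration.

```python
# Toy predictive-to-prescriptive pipeline; every name here is hypothetical.
from dataclasses import dataclass

@dataclass
class Forecast:
    sku: str
    predicted_demand: float   # output of some predictive model

def predict(history: list[float]) -> float:
    """Predictive analytics: a guess about the future (here, a naive average)."""
    return sum(history) / len(history)

def prescribe(forecast: Forecast, on_hand: int) -> dict:
    """Prescriptive analytics: the output is an action, not just a number."""
    reorder_qty = max(0, int(forecast.predicted_demand) - on_hand)
    return {"sku": forecast.sku, "action": "reorder", "quantity": reorder_qty}

def execute(order: dict) -> None:
    """In the scenario the essay describes, this call goes to another system,
    an ERP or purchasing API, with no human review in the loop."""
    print(f"Submitting order: {order}")

forecast = Forecast(sku="WIDGET-7", predicted_demand=predict([120, 140, 130]))
order = prescribe(forecast, on_hand=40)
if order["quantity"] > 0:
    execute(order)   # the system decides and does
```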

The write up states:

There is a natural progression towards advanced analytics – it is a journey that does not have to be on separate deployments. In fact, it is enhanced by having it on the same deployment, and embedding it in a platform that brings together data visualization, planning, insight, and steering/oversight functions.

What is the optimal way to manage systems which are dictating actions or just automatically taking actions?

The answer is, quite surprisingly, a bit of MBA consultantese: Governance.

The most obvious challenge with regards to prescriptive analytics is governance.

Several observations:

  • Governance is unlikely to provide the controls which prescriptive systems warrant. Evidence is that “governance” in some high technology outfits is in short supply.
  • Enhanced automation will pull prescriptive analytics into wide use. The reasons are ones you have heard before: Better, faster, cheaper.
  • Outfits like the Google- and In-Q-Tel-funded Recorded Future and DarkTrace may have to prepare for new competition; for example, firms which specialize in prescription, not prediction.

To sum up, an interesting write up. Perhaps SAP will be the go-to player in plugging prescriptive functions into its software systems?

Stephen E Arnold, October 19, 2018

Free Data Sources

October 19, 2018

We were plowing through our research folder for Beyond Search and realized we had overlooked the article “685 Outstanding Free Data Sources For 2017.” If you need a range of data sources related to such topics as government data, machine learning, and algorithms, you might want to bookmark this listing.

Stephen E Arnold, October 19, 2018

Algorithms Are Neutral. Well, Sort of Objective Maybe?

October 12, 2018

I read “Amazon Trained a Sexism-Fighting, Resume-Screening AI with Sexist Hiring Data, So the Bot Became Sexist.” The main point is that if the training data are biased, the smart software will be biased.

No kidding.

The write up points out:

There is a “machine learning is hard” angle to this: while the flawed outcomes from the flawed training data was totally predictable, the system’s self-generated discriminatory criteria were surprising and unpredictable. No one told it to downrank resumes containing “women’s” — it arrived at that conclusion on its own, by noticing that this was a word that rarely appeared on the resumes of previous Amazon hires.
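
The mechanism the passage describes can be reproduced in toy form. The sketch below is emphatically not Amazon’s system; the four “resumes” and the hire/no-hire labels are invented. It simply shows that a classifier trained on skewed historical outcomes assigns a negative weight to a word such as “women’s” without anyone telling it to.

```python
# Toy reproduction of bias learned from biased labels; data is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

resumes = [
    "captain of chess club, java developer",          # hired
    "java developer, hackathon winner",                # hired
    "captain of women's chess club, java developer",   # not hired
    "women's coding society lead, java developer",     # not hired
]
labels = [1, 1, 0, 0]  # biased historical outcomes

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, labels)

# The weight for "women" comes out negative: nobody told the model to do
# this; it inferred it from the skewed training labels.
idx = vectorizer.vocabulary_["women"]
print("coefficient for 'women':", model.coef_[0][idx])
```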

Now, the company that discovered its smart software had become automatically biased was Amazon.

That’s right.

The same Amazon which has invested significant resources in its SageMaker machine learning platform. This is part of the infrastructure which, Amazon hopes, will propel the US Department of Defense forward for the next five years.

Hold on.

What happens if the system and method produces wonky outputs when a minor dust up is automatically escalated?

Discriminating in hiring is one thing. Fluffing a global matter is another.

Do the smart software systems from Google, IBM, and Microsoft have similar tendencies? My recollection is that this type of “getting lost” has surfaced before. Maybe those innovators pushing narrowly scoped rule based systems were on to something?

Stephen E Arnold, October 12, 2018

Smart Software: There Are Only a Few Algorithms

September 27, 2018

I love simplicity. The write up “The Algorithms That Are Currently Fueling the Deep Learning Revolution” certainly makes deep learning much simpler. Hey, learn these methods and you too can fire up your laptop and chop Big Data down to size. Put digital data into the digital juicer and extract wisdom.

Ah, simplicity.

The write up explains that there are four algorithms that make deep learning tick. I like this approach because it does not require one to know what “deep learning” means. That’s a plus.

The algorithms are:

  • Back propagation
  • Deep Q Learning
  • Generative adversarial network
  • Long short term memory

Are these algorithms or are these suitcase words?

The view from Harrod’s Creek is that once one looks closely at these phrases one will discover multiple procedures, systems and methods, and math slightly more complex than tapping the calculator on one’s iPhone to get a sum. There is, of course, the issue of data validation, bandwidth, computational resources, and a couple of other no-big-deal things.
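
To make that “suitcase word” point concrete, here is a minimal numpy sketch of just one item on the list, back propagation, for a tiny two-layer network on toy data. The data, layer sizes, iteration count, and learning rate are arbitrary illustrative choices; a production system adds the data validation, bandwidth, and compute issues mentioned above.

```python
import numpy as np

# Toy problem: learn XOR with a 2-4-1 sigmoid network. All sizes and the
# learning rate are arbitrary choices for illustration.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

for _ in range(10000):
    # Forward pass
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)
    # Backward pass: propagate the error gradient layer by layer
    d_output = (output - y) * output * (1 - output)
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)
    # Gradient descent updates
    W2 -= lr * hidden.T @ d_output
    b2 -= lr * d_output.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_hidden
    b1 -= lr * d_hidden.sum(axis=0, keepdims=True)

print(output.round(3))  # should move toward [0, 1, 1, 0] as training proceeds
```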

Be a deep learning expert. Easy. Just four algorithms.

Stephen E Arnold,  September 27, 2018

IBM Embraces Blockchain for Banking: Is Amazon in the Game Too?

September 9, 2018

IBM recently announced the creation of LedgerConnect, a blockchain-powered banking service. This is an interesting move for a company that previously seemed to waver on whether it wanted to associate with a technology most famous for its links to cryptocurrency. However, the pairing actually makes sense, as we discovered in a recent IT Pro Portal story, “IBM Reveals Support Blockchain App Store.”

According to an IBM official:

“On LedgerConnect financial institutions will be able to access services in areas such as, but not limited to, know your customer processes, sanctions screening, collateral management, derivatives post-trade processing and reconciliation and market data. By hosting these services on a single, enterprise-grade network, organizations can focus on business objectives rather than application development, enabling them to realize operational efficiencies and cost savings across asset classes.”

This comes in addition to recent news that some of the biggest banks on the planet are already using blockchain for a variety of needs. This includes the story that the Agricultural Bank of China has started issuing large loans using the technology. In fact, out of the 26 publicly owned banks in China, nearly half are using blockchain. IBM looks pretty conservative when you think of it like that, which is just where IBM likes to be.

Amazon supports Ethereum, Hyperledger, and a host of other financial functions. For how long? Years.

Patrick Roland, September 9, 2018

Algorithms Can Be Interesting

September 8, 2018

Navigate to “As Germans Seek News, YouTube Delivers Far-Right Tirades” and consider the consequences of information shaping. I have highlighted a handful of statements from the write up to prime your critical thinking pump. Here goes.

I circled this statement in true blue:

…[a Berlin-based digital researcher] scraped YouTube databases for information on every Chemnitz-related video published this year. He found that the platform’s recommendation system consistently directed people toward extremist videos on the riots — then on to far-right videos on other subjects.

The write up said:

A YouTube spokeswoman declined to comment on the accusations, saying the recommendation system intended to “give people video suggestions that leave them satisfied.”

The newspaper story revealed:

Zeynep Tufekci, a prominent social media researcher at the University of North Carolina at Chapel Hill, has written that these findings suggest that YouTube could become “one of the most powerful radicalizing instruments of the 21st century.”

With additional exploration, the story asserts a possible mathematical idiosyncrasy:

… The YouTube recommendations bunched them all together, sending users through a vast, closed system composed heavily of misinformation and hate.
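
A toy illustration, not YouTube’s recommender, of the “closed system” dynamic the quote describes: a simple “viewers who watched this also watched that” rule keeps pointing back into whatever cluster a video already sits in. The video names and co-watch counts below are invented.

```python
# Toy co-watch recommender showing how greedy similarity chaining stays
# inside one content cluster. All data here is invented for illustration.
import numpy as np

videos = ["riot_news", "riot_commentary", "far_right_channel", "local_weather"]
# co_watch[i][j]: how often videos i and j were watched by the same user
co_watch = np.array([
    [ 0, 40, 25,  2],
    [40,  0, 30,  1],
    [25, 30,  0,  1],
    [ 2,  1,  1,  0],
])

def recommend(current: str, steps: int = 2):
    """Follow the top co-watch link repeatedly, as a greedy recommender might."""
    path = [current]
    for _ in range(steps):
        i = videos.index(path[-1])
        scores = co_watch[i].copy()
        for seen in path:                       # do not re-recommend what was watched
            scores[videos.index(seen)] = -1
        path.append(videos[int(scores.argmax())])
    return path

print(recommend("riot_news"))  # stays inside the riot/far-right cluster
```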

You may want to read the original write up and consider the implications of interesting numerical recipes’ behavior.
