Objectifying the Hiring Process: Human Judgment Must Be Shaped
February 18, 2021
The controversies about management-employee interactions are not efficient. Consider Google. Not only did the Timnit Gebru dust-up sully the pristine, cheerful surface of the Google C-suite, the brilliance of the Google explanation moved the bar for high technology management acumen. Well, at least in terms of publicity it was a winner. Oh, the Gebru incident probably caught the attention of female experts in artificial intelligence. Other high technology companies and consumers of talent from high prestige universities paid attention as well.
What’s the fix for human intermediated personnel challenges? The answer is to get the humans out of the hiring process if possible. Software and algorithms, databases of performance data, and the jargon of psycho-babble are the path forward. If an employee requires termination, the root cause is an algorithm, not a human. So sue the math. Don’t sue the wizards in the executive suite.
These ideas formed in my mind when I read “The Computers Rejecting Your Job Application.” The idea is that individuals who want a real job with health care, a retirement program, and maybe a long tenure with a stable outfit get interviewed via software. Decisions about hiring pivot on algorithms. Once the thresholds are crossed by a candidate, a human (who must take time out from a day filled with back to back Zoom meetings) will notify the applicant that he or she has a “real” job.
If something goes Gebru, the affected can point fingers at the company providing the algorithmic deciders. Damage can be contained. There’s a different throat to choke. What’s not to like?
The write up from the Beeb, a real news outfit banned in China, reports:
The questions, and your answers to them, are designed to evaluate several aspects of a jobseeker’s personality and intelligence, such as your risk tolerance and how quickly you respond to situations. Or as Pymetrics puts it, “to fairly and accurately measure cognitive and emotional attributes in only 25 minutes”.
Yes, online. Just 25 minutes. Forget those annoying interview days. Play a game. Get hired or not. Efficient. Logical.
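The screening logic the article describes (measure a few attributes, advance only candidates who clear every cutoff) is not complicated. A toy sketch, with invented attribute names and thresholds rather than Pymetrics’ actual model:

```python
# Toy algorithmic screener: a candidate advances only if every measured
# attribute clears its threshold. Attribute names and cutoffs are invented
# for illustration; real vendors use proprietary, trained models.
THRESHOLDS = {"risk_tolerance": 0.4, "response_speed": 0.5, "attention": 0.6}

def screen(candidate_scores: dict) -> bool:
    """Return True if the candidate clears every threshold."""
    return all(
        candidate_scores.get(attr, 0.0) >= cutoff
        for attr, cutoff in THRESHOLDS.items()
    )

print(screen({"risk_tolerance": 0.7, "response_speed": 0.9, "attention": 0.8}))  # True
print(screen({"risk_tolerance": 0.7, "response_speed": 0.2, "attention": 0.8}))  # False
```

Note that a single low score ends the candidacy with no appeal and no human in the loop, which is exactly the property the vendors sell.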
Do online hiring and filtering systems work? The write up reminds the thumb-typing reader about Amazon’s algorithmic hiring and filtering system:
In 2018 it was widely reported to have scrapped its own system, because it showed bias against female applicants. The Reuters news agency said that Amazon’s AI system had “taught itself that male candidates were preferable” because they more often had greater tech industry experience on their resume. Amazon declined to comment at the time.
From my vantage point, it seems as if these algorithmic hiring vendors are selling their services. That’s great until one of the customers takes the outfit to court.
Progress? Absolutely.
Stephen E Arnold, February 17, 2021
AI Success: A Shocking Omission?
February 11, 2021
I read “Here’s What All Successful AI Startups Have in Common.” The “all” troubles me. Plus, when I received a copy of the CB Insights report, I did not read it. Sorry. Logo collections don’t do it for me.
I noted this statement in the “all” article:
I think “AI startup” is a misnomer when applied to many of the companies included in the CB Insights list because it puts too much focus on the AI side and too little on the other crucial aspects of the company. Successful companies start by addressing an overlooked or poorly solved problem with a sound product strategy. This gives them the minimum market penetration needed to establish their business model and gather data to gain insights, steer their product in the right direction, and train machine learning models. Finally, they use AI as a differentiating factor to solidify their position and maintain the edge over competitors. No matter how advanced, AI algorithms alone don’t make a successful startup nor a business strategy.
What was the shocking omission? The massive amount of jargon and marketing hoo hah each firm generates. I wonder why that point was overlooked. Oh, there is another common factor too: Reliance on the same small set of methods most AI firms share. Thank you, Reverend Bayes.
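The nod to Reverend Bayes refers to Bayes’ rule, the workhorse behind much of that shared toolkit. As a reminder of how small the core machinery is, here it is applied to a toy spam-style question, with invented numbers:

```python
# Bayes' rule: P(class | evidence) = P(evidence | class) * P(class) / P(evidence)
# All probabilities below are invented for illustration.
p_spam = 0.2                # prior: P(spam)
p_word_given_spam = 0.6     # likelihood: P(word "free" appears | spam)
p_word_given_ham = 0.05     # likelihood: P(word "free" appears | not spam)

# Total probability of seeing the word at all.
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)

# Posterior: how much the evidence shifts the prior.
p_spam_given_word = p_word_given_spam * p_spam / p_word
print(round(p_spam_given_word, 3))  # 0.75
```

Stack a few hundred features on top of this one line of arithmetic and you have the engine room of a great many “AI startups.”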
Stephen E Arnold, February 11, 2021
Useful No Cost Book about Algorithms
February 11, 2021
If you find math books interesting, you will want to take a look at Jeff Erickson’s Algorithms. The text complements courses the author taught at the University of Illinois. I enjoyed the section called “Greedy Algorithms,” and there are other useful sections as well, including the discussion of search. The book contains illustrations and exercises. However, the reader will not find answers to these. This page provides links to other material developed for students by the author. The text consumes more than 450 pages. Very useful information.
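As a taste of what the “Greedy Algorithms” section covers: for interval scheduling, always taking the compatible activity that finishes earliest is provably optimal. A minimal sketch with invented intervals:

```python
def max_activities(intervals):
    """Greedy interval scheduling: repeatedly pick the activity that
    finishes earliest among those compatible with what is already chosen."""
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:          # compatible with the picks so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen

# Eight candidate activities; the greedy choice keeps three of them.
print(max_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]))
# [(1, 4), (5, 7), (8, 11)]
```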
Stephen E Arnold, February 11, 2021
Facial Recognition? Old Hat. Now It Is Behavioral Recognition
February 8, 2021
The possibilities of how AI will revolutionize robotics, medical technology, such as artificial limbs, and videogames are as endless as the imagination. Fujitsu moved AI technology one step closer to the imagination says the New Zealand IT Brief in: “Fujitsu Develops Behavioral Recognition Tech, Completes World First.”
Fujitsu Laboratories designed the world’s most accurate recognition of complex actions and behaviors based on skeleton data. Using deep learning algorithms, Fujitsu’s technology mapped all the positions and connections of complex joint behavior when multiple joints move in tandem. The technology earned the highest accuracy score against the standard benchmark in behavior recognition.
The write up explains how the technology works:
“In general, human behavior recognition utilizing AI relies on temporal changes in the position of each of the skeletal joints, including in the hands, elbows, and shoulders, as identifying features, which are then linked to simple movement patterns such as standing or sitting.
With time series behavior-recognition technology developed by Fujitsu Labs, Fujitsu has successfully realized highly-accurate image recognition using a deep learning model that can operate with high-accuracy even for complex behaviors in which multiple joints change in conjunction with each other.
The new technology is based on an AI model that can be trained in advance using the time series data of joints.
The connection strength (weight) with neighboring joints can be optimized, and effective connection relationships for behavior recognition can be acquired, Fujitsu states.
With conventional technologies, it was necessary to accurately grasp the individual characteristics of each joint. With an AI model that has already been trained, the combined features of the adjacent joints that are linked can be extracted, making it possible to achieve highly-accurate recognition for complex movements, according to the company.”
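Fujitsu has not published its architecture, but the description (joints as nodes, optimized connection strengths to neighboring joints) matches the general shape of graph-convolution models over skeleton data. A heavily simplified, hypothetical sketch of one such aggregation step:

```python
import numpy as np

# Sketch of the idea behind skeleton-based recognition: each joint's feature
# vector is mixed with its neighbors' features through connection weights.
# This is one aggregation step of a graph-convolution-style model, NOT
# Fujitsu's actual (unpublished) architecture. The skeleton is simplified
# to three joints.
joints = ["shoulder", "elbow", "hand"]
adjacency = np.array([[1, 1, 0],        # shoulder <-> elbow
                      [1, 1, 1],        # elbow <-> shoulder and hand
                      [0, 1, 1]], dtype=float)

rng = np.random.default_rng(0)
features = rng.normal(size=(3, 4))      # per-joint features (e.g. x, y, dx, dy)
weights = rng.normal(size=(4, 4))       # learnable projection, here random

# Normalize so each joint averages over itself and its neighbors, then
# project. The "connection strength" the article mentions lives in these
# weights, which training would optimize.
norm_adj = adjacency / adjacency.sum(axis=1, keepdims=True)
mixed = norm_adj @ features @ weights
print(mixed.shape)  # (3, 4): one updated feature vector per joint
```

Stacking such steps over time series of joint positions is what lets the model pick up behaviors in which multiple joints move in conjunction.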
Fujitsu wants to roll out this new system this year to make workplaces safer, but the true possibilities of the technology have yet to be explored.
Whitney Grace, February 8, 2021
Complexity Analysis Underscores a Fallacy in the Value of Mindless Analysis of Big Data
February 8, 2021
First, I want to mention that in the last two days I have read essays which are like PowerPoint slide shows with animation and written text. Is this a new form of “writing”?
Now to the business of the essay and its mini movies: “What Is Complexity Science?” provides a rundown of the different types of complexity which academics, big thinkers, number nerds, and wizard-type people have identified.
If you are not familiar with the buzzwords and how each type of complexity generates behaviors which are tough to predict in real life, read the paper, which is on Microsoft GitHub.
Here’s the list:
- Interactions or jujujajaki networks. Think of a graph of social networks evolving in real time.
- Emergence. Stuff just happens when other stuff interacts. Rioting crowds or social media memes.
- Dynamics. Think back to the pendulum your high school physics teacher tried to explain and got wrong.
- Forest fires. Visualize the LA wildfires.
- Adaptation. Remember your friend from college who went to prison. When he was released and hit the college reunion, he had not yet adjusted to life outside: Hunched, stood back to wall, put his left arm around his food, weird eye contact, etc.
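Several of these behaviors, emergence and forest fires in particular, are classically illustrated with tiny simulations whose local rules are trivial but whose global behavior is hard to predict. A minimal forest-fire cellular automaton sketch, all parameters invented:

```python
import random

# Minimal forest-fire cellular automaton: trees grow with probability P_GROW,
# lightning strikes with probability P_FIRE, and fire spreads to adjacent
# trees. The rules are trivially simple; the global burn pattern is not.
SIZE, P_GROW, P_FIRE = 20, 0.05, 0.001
EMPTY, TREE, FIRE = 0, 1, 2

def step(grid):
    """One synchronous update of the whole grid."""
    new = [row[:] for row in grid]
    for r in range(SIZE):
        for c in range(SIZE):
            if grid[r][c] == FIRE:
                new[r][c] = EMPTY                  # burned out
            elif grid[r][c] == TREE:
                neighbors = [grid[r + dr][c + dc]
                             for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                             if 0 <= r + dr < SIZE and 0 <= c + dc < SIZE]
                if FIRE in neighbors or random.random() < P_FIRE:
                    new[r][c] = FIRE               # catch fire
            elif random.random() < P_GROW:
                new[r][c] = TREE                   # new growth
    return new

random.seed(42)
grid = [[EMPTY] * SIZE for _ in range(SIZE)]
for _ in range(200):
    grid = step(grid)
print(sum(row.count(TREE) for row in grid))        # surviving trees
```

No single rule predicts the size distribution of the fires; that only shows up when the whole system runs. That is the complexity crowd’s point in miniature.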
The write up explains that figuring out what’s happening is difficult. Hence, mathematics. You know. Unreasonably effective at outputting useful results. (How about that 70 to 90 percent accuracy? Close enough for horseshoes?) Except when the prediction is wrong. (Who has heard, “Sorry about the downside of chemotherapy, Ms. Smith. Your treatment failed, and our data suggest it works in most cases.”)
Three observations:
- Complexity is like thinking and manipulating infinity. Georg Cantor illustrates what can happen when pondering the infinite.
- Predictive methods make a stab at making sense out of something which may be full of surprises. What’s important is not the 65 to 85 percent accuracy. The big deal is the 15 to 35 percent which remains — well — unpredictable due to complexity.
- Humans want certainty, acceptable risk, and possibly change on quite specific terms. Hope springs eternal for mathematicians who deliver information supporting this human need.
Complicated stuff, complexity. Math works until it doesn’t. But now we have a Ramanujan Machine which can generate conjectures. Simple, right?
Stephen E Arnold, February 8, 2021
Facebook Algorithms: Pernicious, Careless, Indifferent, or No Big Deal?
February 4, 2021
What is good for the social media platform is not necessarily good for its users. Or society. The Startup examines the “Facebook AI Algorithm: One of the Most Destructive Technologies Ever Invented.” Facebook’s AI is marketed as a way to give users more of what they want to see and that it is—to a point. We suspect most users would like to avoid misinformation, but if it will keep eyeballs on the platform Facebook serves up fake news alongside (or instead of) reputable content. Its algorithms are designed to serve its interests, not ours. Considering Facebook has become the primary source of news in the U.S., this feature (not a bug) is now a real problem for society. Writer David Meerman Scott observes:
“The Facebook Artificial Intelligence-powered algorithm is designed to suck users into the content that interests them the most. The technology is tuned to serve up more and more of what you click on, be that yoga, camping, Manchester United, or K-pop. That sounds great, right? However, the Facebook algorithm also leads tens of millions of its 2.7 billion global users into an abyss of misinformation, a quagmire of lies, and a quicksand of conspiracy theories.”
As we have seen, such conspiracy theories can lead to dire real-world consequences. All because Facebook (and other social media platforms) lead users down personalized rabbit holes for increased ad revenue. Sites respond to criticism by banning some content, but the efforts are proving to be inadequate. Scott suggests the only real solution is to adjust the algorithms themselves to avoid displaying misinformation in the first place. Since this will mean losing money, though, Facebook is unlikely to do so without being forced to by regulators, advertisers, or its employees.
The Next Web looks at how these algorithms work in, “Here’s How AI Determines What You See on the Facebook News Feed.” Reporter Thomas Macaulay writes:
“The ranking system first collects candidate posts for each user, including those shared by their friends, Groups, or Pages since their last login. It then gives each post a score based on a variety of factors, such as who shared the content and how it matches with what the user generally interacts with. Next, a lightweight model narrows the pool of candidates down to a shortlist. This allows more powerful neural networks to give each remaining post a score that determines the order in which they’re placed. Finally, the system adds contextual features like diversity rules to ensure that the News Feed has a variety of content. The entire process is complete in the time it takes to open the Facebook app.”
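The pipeline Macaulay describes is a standard two-stage ranking pattern: a cheap model prunes the candidate pool, an expensive model orders the survivors, and a final pass applies diversity rules. A schematic sketch in which the scoring functions are trivial stand-ins, not Facebook’s models:

```python
# Two-stage ranking pipeline in the shape Macaulay describes. The scoring
# functions are trivial stand-ins for the real lightweight and neural models.
def lightweight_score(post):
    return post["affinity"]                       # cheap first-pass heuristic

def heavyweight_score(post):
    # Stand-in for the "more powerful neural networks" stage.
    return 0.7 * post["affinity"] + 0.3 * post["engagement"]

def rank_feed(candidates, shortlist_size=3, max_per_source=2):
    # Stage 1: cheap model narrows the candidate pool to a shortlist.
    shortlist = sorted(candidates, key=lightweight_score, reverse=True)[:shortlist_size]
    # Stage 2: expensive model orders the shortlist.
    ranked = sorted(shortlist, key=heavyweight_score, reverse=True)
    # Stage 3: contextual diversity rule, here a cap of posts per source.
    feed, seen = [], {}
    for post in ranked:
        if seen.get(post["source"], 0) < max_per_source:
            feed.append(post)
            seen[post["source"]] = seen.get(post["source"], 0) + 1
    return feed

posts = [
    {"id": 1, "source": "friend", "affinity": 0.9, "engagement": 0.2},
    {"id": 2, "source": "page",   "affinity": 0.8, "engagement": 0.9},
    {"id": 3, "source": "friend", "affinity": 0.7, "engagement": 0.8},
    {"id": 4, "source": "group",  "affinity": 0.1, "engagement": 0.9},
]
print([p["id"] for p in rank_feed(posts)])
```

Note where the editorial judgment lives: entirely in the scoring functions and the diversity caps. Change either and a different feed, and a different information diet, falls out.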
Given recent events, it is crucial Facebook and other platforms modify their AI asap. What will it take?
Cynthia Murrell, February 4, 2021
Deep Fakes Are Old
November 24, 2020
Better late than never, we suppose. The New York Post reports, “BBC Apologizes for Using Fake Bank Statements to Land Famous Princess Diana Interview.” Princess Diana being unavailable to receive the apology, the BBC apologized to her brother instead for luring her into the 1995 interview with counterfeit documentation. Writer Marisa Dellatto specifies:
“Network director-general Tim Davie wrote to Diana’s brother, Charles Spencer, to acknowledge the fraudulent actions of reporter Martin Bashir 25 years ago. Last month, the BBC finally admitted that Bashir showed Spencer bank statements doctored by a staff graphic designer. Spencer had alleged that Bashir told his sister ‘fantastical stories to win her trust’ and showed him fake bank records which reportedly helped land Bashir the interview. At the time, the princess was apparently deeply worried she was being spied on and that her staff was leaking information about her. Bashir’s ‘evidence’ allegedly made her confident to do the interview, one year after she and [Prince] Charles split.”
This is the interview in which Princess Di famously remarked that “there were three of us in this marriage, so it was a bit crowded,” and the couple filed for divorce in the weeks that followed. (For those who were not around or old enough to follow the story, her statement was a reference to Prince Charles’ ongoing relationship with Camilla Parker Bowles, whom he subsequently married.)
For what it is worth, a BBC spokesperson insists this sort of deception would not pass the organization’s more stringent editorial processes now in place. Apparently, Bashir also intimidated the Princess with fake claims her phones had been tapped by the British Intelligence Service. Though it did issue the apology, the BBC does not plan to press the issue further because Bashir is now in poor health.
Cynthia Murrell, November 24, 2020
Linear Math Textbook: For Classroom Use or Individual Study
October 30, 2020
Jim Hefferon’s Linear Algebra is a math textbook. You can get it for free by navigating to this page. From Mr. Hefferon’s Web page for the book, you can download a copy and access a range of supplementary materials. These include:
- Classroom slides
- Exercise sets
- A “lab” manual which requires Sage
- Video.
The book is designed for students who have completed one semester of calculus. Remember: Linear algebra is useful for poking around in search or neutralizing drones. Zaap. Highly recommended.
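The “useful for search” aside is easy to demonstrate: ranking documents against a query by cosine similarity is a couple of matrix operations. A toy sketch with invented term-count vectors:

```python
import numpy as np

# Cosine similarity, the linear-algebra workhorse of vector-space search.
# Rows are documents, columns are term counts; all values are invented.
docs = np.array([[2.0, 1.0, 0.0],     # doc 0
                 [0.0, 1.0, 3.0],     # doc 1
                 [1.0, 0.0, 1.0]])    # doc 2
query = np.array([1.0, 0.0, 2.0])

# Dot products normalized by vector lengths: one matrix-vector multiply
# plus two norms gives a relevance score per document.
scores = docs @ query / (np.linalg.norm(docs, axis=1) * np.linalg.norm(query))
print(int(scores.argmax()))           # index of the best-matching document
```

Everything in that computation (matrix-vector products, norms, orthogonality) is chapter material in a first linear algebra course like Hefferon’s.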
Stephen E Arnold, October 30, 2020
A Googley Book for the Google-Aspiring Person
October 29, 2020
Another free book? Yep, and it comes from the IBM-centric and Epstein-allied Massachusetts Institute of Technology. The other entity providing conceptual support is the Google, the online advertising company. MIT is an elite generator. Google is a lawsuit attractor. You will, however, look in vain through the 1,000 page volume for explanations of the numerical theorems explaining the amplification of value when generators and attractors interact.
The book, published in 2017, is “Mathematics for Computer Science.” The authors are a Googler named Eric Lehman, the MIT professors F Thomas Leighton and Albert R Meyer, and possibly a number of graduate students whose work helped inform the content.
The book’s numerical recipes, procedures, and explanations fall into five categories:
- Proofs, you know, that’s Googley truth stuff for skeptical colleagues who don’t want to be in a meat space meeting or a virtual meeting
- Structures. These are the nuts and bolts of being able to solve problems the Googley way
- Counting. Addition and such on steroids
- Probability. This is the reality of the Google. And you thought Robinhood was the manifestation of winning a game. Ho ho ho.
- Recurrences. Revisiting the Towers of Hanoi. This is a walk down memory lane.
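The Towers of Hanoi entry is the canonical recurrence: moving n disks takes T(n) = 2T(n-1) + 1 moves, which solves to 2^n - 1. A minimal check:

```python
def hanoi_moves(n: int) -> int:
    """Count moves via the recurrence T(n) = 2*T(n-1) + 1, with T(0) = 0."""
    return 0 if n == 0 else 2 * hanoi_moves(n - 1) + 1

# The closed form is 2**n - 1; verify the recurrence against it.
for n in range(10):
    assert hanoi_moves(n) == 2**n - 1

print(hanoi_moves(5))  # 31
```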
You can download your copy at this link. Will the MIT Press crank out 50,000 copies for those who lack access to an industrial strength laser printer?
Another IBM infusion of cash may be needed to make that happen. Mr. Epstein is no longer able to contribute money to the fascinating MIT. What’s the catch? Perhaps that will be a question on a reader’s Google interview?
Stephen E Arnold, October 29, 2020
The Bulldozer: Driver Accused of Reckless Driving
October 28, 2020
I don’t know if the story in the Sydney Morning Herald is true. You, as I did, will have to work through the “real” news report about Amazon’s commitment to its small sellers. With rumors of Jeff Bezos checking out the parking lots at CNN facilities, it is difficult to know where the big machine’s driver will steer the online bookstore next. Just navigate to “Ruined My Life: After Going All In on Amazon, a Merchant Says He Lost Everything.” The hook for the story is that a small online seller learned that Amazon asserted his product inventory consisted of knock offs, what someone told me was a “fabulous fake.” Amazon wants to sell “real” products made by “real” companies with rights to the “real” product. A Rolex on Amazon, therefore, is “real,” unlike the fine devices available at the Paris street market Les Puces de Saint-Ouen.
What happened?
The Bezos bulldozer allegedly ground the inventory of the small merchant into recyclable materials. The write up explains in objective, actual factual “real” news rhetoric:
Stories like his [the small merchant with zero products and income] have swirled for years in online merchant forums and conferences. Amazon can suspend sellers at any time for any reason, cutting off their livelihoods and freezing their money for weeks or months. The merchants must navigate a largely automated, guilty-until-proven-innocent process in which Amazon serves as judge and jury. Their emails and calls can go unanswered, or Amazon’s replies are incomprehensible, making sellers suspect they’re at the mercy of algorithms with little human oversight.
Yikes, algorithms. What did those savvy math wonks do to alleged knock offs? What about the kidney transplant algorithms? Wait, that’s a different algorithm.
The small merchant was caught in the bulldozer’s blade. The write up explains:
Hoping to have his [the small merchant again] account reinstated and continue selling on the site, Govani [the small merchant] put off the decision. He received a total of 11 emails from Amazon each giving him different dates at which time his inventory would be destroyed if he hadn’t removed it. He sought clarity from Amazon about the conflicting dates. When he tried to submit an inventory removal order through Amazon’s web portal, it wouldn’t let him.
What’s happening now?
The small merchant is couch surfing and trying to figure out what’s next. One hopes that the Bezos bulldozer will not back over the small merchant. Taking Amazon to court is an option. There is the possibility of binding arbitration.
But it may be difficult to predict what the driver of the Bezos bulldozer will do. What’s a small merchant when the mission is larger? In the absence of meaningful regulation and a functioning compass on the big machine, maybe that renovation of CNN is more interesting than third party sellers? The Bezos bulldozer is a giant device with many moving parts. Can those driving it know what’s going on beneath the crawler treads? Is it break time yet?
Stephen E Arnold, October 28, 2020